Mar 12 14:47:20 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 12 14:47:20 crc restorecon[4689]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 12 14:47:20 crc restorecon[4689]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc 
restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc 
restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 
14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 14:47:20 crc 
restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 14:47:20 crc 
restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 
14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 14:47:20 crc 
restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc 
restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc 
restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 
12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:20 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 
crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc 
restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc 
restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 14:47:21 crc restorecon[4689]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 14:47:21 crc restorecon[4689]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 12 14:47:22 crc kubenswrapper[4832]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 14:47:22 crc kubenswrapper[4832]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 12 14:47:22 crc kubenswrapper[4832]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 14:47:22 crc kubenswrapper[4832]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 12 14:47:22 crc kubenswrapper[4832]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 12 14:47:22 crc kubenswrapper[4832]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.058335 4832 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.071951 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.071984 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.071995 4832 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072003 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072013 4832 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072021 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072029 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072038 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072046 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 
12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072057 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072073 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072082 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072092 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072100 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072109 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072117 4832 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072128 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072137 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072145 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072154 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072162 4832 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072170 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072178 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072185 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072193 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072200 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072208 4832 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072215 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072223 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072230 4832 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072238 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072246 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072253 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072260 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072268 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072276 4832 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072284 4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072293 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072301 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072308 4832 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072316 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072324 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072332 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072339 4832 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072346 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072355 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072362 4832 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072372 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072380 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072387 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072394 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072402 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072410 4832 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072418 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072425 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072433 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072440 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072448 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072456 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072464 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072474 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072483 4832 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072492 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072499 4832 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072539 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072546 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072554 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072562 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072573 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072582 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.072590 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072729 4832 flags.go:64] FLAG: --address="0.0.0.0"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072746 4832 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072760 4832 flags.go:64] FLAG: --anonymous-auth="true"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072771 4832 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072782 4832 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072791 4832 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072803 4832 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072814 4832 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072823 4832 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072832 4832 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072842 4832 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072851 4832 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072860 4832 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072869 4832 flags.go:64] FLAG: --cgroup-root=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072878 4832 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072887 4832 flags.go:64] FLAG: --client-ca-file=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072895 4832 flags.go:64] FLAG: --cloud-config=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072904 4832 flags.go:64] FLAG: --cloud-provider=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072913 4832 flags.go:64] FLAG: --cluster-dns="[]"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072923 4832 flags.go:64] FLAG: --cluster-domain=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072931 4832 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072940 4832 flags.go:64] FLAG: --config-dir=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072949 4832 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072958 4832 flags.go:64] FLAG: --container-log-max-files="5"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072969 4832 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072978 4832 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072987 4832 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.072996 4832 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073005 4832 flags.go:64] FLAG: --contention-profiling="false"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073014 4832 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073023 4832 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073032 4832 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073041 4832 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073052 4832 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073061 4832 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073069 4832 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073079 4832 flags.go:64] FLAG: --enable-load-reader="false"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073088 4832 flags.go:64] FLAG: --enable-server="true"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073097 4832 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073108 4832 flags.go:64] FLAG: --event-burst="100"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073117 4832 flags.go:64] FLAG: --event-qps="50"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073126 4832 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073135 4832 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073144 4832 flags.go:64] FLAG: --eviction-hard=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073155 4832 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073163 4832 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073172 4832 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073181 4832 flags.go:64] FLAG: --eviction-soft=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073189 4832 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073199 4832 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073207 4832 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073216 4832 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073225 4832 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073234 4832 flags.go:64] FLAG: --fail-swap-on="true"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073243 4832 flags.go:64] FLAG: --feature-gates=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073258 4832 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073267 4832 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073276 4832 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073285 4832 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073294 4832 flags.go:64] FLAG: --healthz-port="10248"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073303 4832 flags.go:64] FLAG: --help="false"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073312 4832 flags.go:64] FLAG: --hostname-override=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073321 4832 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073330 4832 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073339 4832 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073348 4832 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073356 4832 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073364 4832 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073373 4832 flags.go:64] FLAG: --image-service-endpoint=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073381 4832 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073390 4832 flags.go:64] FLAG: --kube-api-burst="100"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073399 4832 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073409 4832 flags.go:64] FLAG: --kube-api-qps="50"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073418 4832 flags.go:64] FLAG: --kube-reserved=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073427 4832 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073435 4832 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073444 4832 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073453 4832 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073461 4832 flags.go:64] FLAG: --lock-file=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073470 4832 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073479 4832 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073488 4832 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073527 4832 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073537 4832 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073546 4832 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073555 4832 flags.go:64] FLAG: --logging-format="text"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073564 4832 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073573 4832 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073582 4832 flags.go:64] FLAG: --manifest-url=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073590 4832 flags.go:64] FLAG: --manifest-url-header=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073602 4832 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073612 4832 flags.go:64] FLAG: --max-open-files="1000000"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073622 4832 flags.go:64] FLAG: --max-pods="110"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073631 4832 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073641 4832 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073650 4832 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073659 4832 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073668 4832 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073677 4832 flags.go:64] FLAG: --node-ip="192.168.126.11"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073686 4832 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073704 4832 flags.go:64] FLAG: --node-status-max-images="50"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073714 4832 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073724 4832 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073734 4832 flags.go:64] FLAG: --pod-cidr=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073744 4832 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073756 4832 flags.go:64] FLAG: --pod-manifest-path=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073765 4832 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073774 4832 flags.go:64] FLAG: --pods-per-core="0"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073784 4832 flags.go:64] FLAG: --port="10250"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073793 4832 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073802 4832 flags.go:64] FLAG: --provider-id=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073811 4832 flags.go:64] FLAG: --qos-reserved=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073820 4832 flags.go:64] FLAG: --read-only-port="10255"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073829 4832 flags.go:64] FLAG: --register-node="true"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073837 4832 flags.go:64] FLAG: --register-schedulable="true"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073847 4832 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073862 4832 flags.go:64] FLAG: --registry-burst="10"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073871 4832 flags.go:64] FLAG: --registry-qps="5"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073879 4832 flags.go:64] FLAG: --reserved-cpus=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073888 4832 flags.go:64] FLAG: --reserved-memory=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073899 4832 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073908 4832 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073917 4832 flags.go:64] FLAG: --rotate-certificates="false"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073926 4832 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073934 4832 flags.go:64] FLAG: --runonce="false"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073943 4832 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073952 4832 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073962 4832 flags.go:64] FLAG: --seccomp-default="false"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073971 4832 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073980 4832 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073989 4832 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.073998 4832 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074007 4832 flags.go:64] FLAG: --storage-driver-password="root"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074016 4832 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074024 4832 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074033 4832 flags.go:64] FLAG: --storage-driver-user="root"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074041 4832 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074051 4832 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074060 4832 flags.go:64] FLAG: --system-cgroups=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074068 4832 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074082 4832 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074091 4832 flags.go:64] FLAG: --tls-cert-file=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074099 4832 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074109 4832 flags.go:64] FLAG: --tls-min-version=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074119 4832 flags.go:64] FLAG: --tls-private-key-file=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074128 4832 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074137 4832 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074146 4832 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074155 4832 flags.go:64] FLAG: --v="2"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074166 4832 flags.go:64] FLAG: --version="false"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074177 4832 flags.go:64] FLAG: --vmodule=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074187 4832 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.074197 4832 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074413 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074423 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074432 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074441 4832 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074479 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074488 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074496 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074527 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074535 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074543 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074551 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074558 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074566 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074574 4832 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074582 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074589 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074597 4832 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074604 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074612 4832 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074620 4832 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074628 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074635 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074643 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074650 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074658 4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074666 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074673 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074682 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074690 4832 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074698 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074705 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074714 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074722 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074730 4832 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074739 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074749 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074758 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074768 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074776 4832 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074783 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074791 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074799 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074807 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074815 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074822 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074830 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074838 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074848 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074856 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074864 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074871 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074879 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074887 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074894 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074902 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074909 4832 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074916 4832 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074924 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074932 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074939 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074947 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074954 4832 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074962 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074970 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074981 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.074995 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.075003 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.075011 4832 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.075025 4832 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.075033 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.075043 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.080979 4832 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.097215 4832 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.097268 4832 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097402 4832 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097417 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097427 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097435 4832 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097444 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097453 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097461 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097471 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097479 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097488 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097496 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097538 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097547 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097556 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097565 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097573 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097582 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097591 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097599 4832 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097607 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097616 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097625 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097633 4832 feature_gate.go:330] unrecognized feature gate: 
AlibabaPlatform Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097644 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097652 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097661 4832 feature_gate.go:330] unrecognized feature gate: Example Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097669 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097679 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097689 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097700 4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097709 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097722 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097738 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097751 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097763 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097775 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097786 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097797 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097807 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097817 4832 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097830 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097842 4832 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097852 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097862 4832 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097871 4832 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097880 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097890 
4832 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097900 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097910 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097919 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097929 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097940 4832 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097950 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097959 4832 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097968 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097976 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097985 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.097994 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098002 4832 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098013 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098024 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098035 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098045 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098054 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098062 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098071 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098080 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098089 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098097 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098107 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098115 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.098129 4832 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false 
UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098374 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098389 4832 feature_gate.go:330] unrecognized feature gate: Example Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098400 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098411 4832 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098421 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098431 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098439 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098485 4832 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098542 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098563 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098576 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098589 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098601 4832 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098614 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 14:47:22 
crc kubenswrapper[4832]: W0312 14:47:22.098625 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098635 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098646 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098656 4832 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098665 4832 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098676 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098687 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098695 4832 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098704 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098712 4832 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098723 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098735 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098746 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098755 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098764 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098775 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098786 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098798 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098808 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098818 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098828 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098837 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098846 4832 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098855 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098865 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 14:47:22 crc 
kubenswrapper[4832]: W0312 14:47:22.098874 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098884 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098893 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098902 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098910 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098918 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098926 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098935 4832 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098943 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098952 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098960 4832 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098969 4832 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098978 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.098986 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 
14:47:22.098995 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.099003 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.099011 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.099020 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.099029 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.099038 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.099046 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.099055 4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.099066 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.099076 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.099085 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.099098 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.099109 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.099121 4832 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.099133 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.099144 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.099154 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.099162 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.099204 4832 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.099475 4832 server.go:940] "Client rotation is on, will bootstrap in background" Mar 12 14:47:22 crc kubenswrapper[4832]: E0312 14:47:22.121929 4832 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.125760 4832 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 12 14:47:22 crc 
kubenswrapper[4832]: I0312 14:47:22.125887 4832 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.137558 4832 server.go:997] "Starting client certificate rotation" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.137594 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.137770 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.249834 4832 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.260969 4832 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 12 14:47:22 crc kubenswrapper[4832]: E0312 14:47:22.269561 4832 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.309117 4832 log.go:25] "Validated CRI v1 runtime API" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.428786 4832 log.go:25] "Validated CRI v1 image API" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.431105 4832 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.441618 4832 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 
2026-03-12-14-42-46-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.441651 4832 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.461457 4832 manager.go:217] Machine: {Timestamp:2026-03-12 14:47:22.457882529 +0000 UTC m=+1.101896855 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:4e30ce0f-4c5a-4dcd-a098-48ed124d926b BootID:c18c2a96-b70d-433d-bc7b-43bacf303c77 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 
DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:6a:49:c6 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:6a:49:c6 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d6:0d:87 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b8:76:48 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e3:e8:72 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:fb:97:02 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:a2:1a:43:fa:f5:fe Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:46:7c:48:27:85:b7 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.462004 4832 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.462271 4832 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.467913 4832 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.468266 4832 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.468314 4832 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSR
eserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.468719 4832 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.468740 4832 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.469664 4832 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.469708 4832 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.470076 4832 state_mem.go:36] "Initialized new in-memory state store" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.470237 4832 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.494651 4832 kubelet.go:418] "Attempting to sync node with API server" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.494694 4832 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.494744 4832 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.494765 4832 kubelet.go:324] "Adding apiserver pod source" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.494784 4832 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 
14:47:22.509761 4832 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.522206 4832 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.522816 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.522833 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Mar 12 14:47:22 crc kubenswrapper[4832]: E0312 14:47:22.522926 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:47:22 crc kubenswrapper[4832]: E0312 14:47:22.522935 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.527415 4832 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 12 14:47:22 
crc kubenswrapper[4832]: I0312 14:47:22.529687 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.529752 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.529782 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.529801 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.529831 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.529847 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.529868 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.529893 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.529913 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.529930 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.529954 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.529971 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.530024 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.530881 4832 server.go:1280] "Started kubelet" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 
14:47:22.531288 4832 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.531312 4832 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.532430 4832 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 14:47:22 crc systemd[1]: Started Kubernetes Kubelet. Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.533423 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.536033 4832 server.go:460] "Adding debug handlers to kubelet server" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.538315 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.538484 4832 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 14:47:22 crc kubenswrapper[4832]: E0312 14:47:22.538688 4832 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.539205 4832 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.539309 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Mar 12 14:47:22 crc kubenswrapper[4832]: E0312 14:47:22.539380 4832 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.538558 4832 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.539454 4832 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 12 14:47:22 crc kubenswrapper[4832]: E0312 14:47:22.548955 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="200ms" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.549792 4832 factory.go:55] Registering systemd factory Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.549899 4832 factory.go:221] Registration of the systemd container factory successfully Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.550246 4832 factory.go:153] Registering CRI-O factory Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.550296 4832 factory.go:221] Registration of the crio container factory successfully Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.550377 4832 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.550401 4832 factory.go:103] Registering Raw factory Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.550418 4832 manager.go:1196] Started watching for new ooms in manager Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.551185 
4832 manager.go:319] Starting recovery of all containers Mar 12 14:47:22 crc kubenswrapper[4832]: E0312 14:47:22.566121 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.227:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189c1f5e27b6fc19 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.530823193 +0000 UTC m=+1.174837459,LastTimestamp:2026-03-12 14:47:22.530823193 +0000 UTC m=+1.174837459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.571729 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.571819 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.571840 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.571861 4832 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.571878 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.571895 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.571912 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.571930 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.571951 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.571969 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.571986 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572002 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572019 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572040 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572057 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572077 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572093 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572109 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572127 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572213 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572233 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572251 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572269 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572290 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572314 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572339 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572366 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572395 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Mar 12 
14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572417 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572441 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572494 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572557 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572576 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572594 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572613 4832 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572631 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572653 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572672 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572690 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572706 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572723 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572772 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572788 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572805 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572822 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572841 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572861 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572880 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572897 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572914 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572931 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.572948 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573025 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" 
seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573045 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573065 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573083 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573102 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573121 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573140 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573158 4832 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573174 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573192 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573215 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573243 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573266 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573291 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573314 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573333 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573352 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573369 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573387 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573404 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573422 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573439 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573456 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573473 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573491 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573535 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573554 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573574 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573590 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573614 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573637 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573657 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573677 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573694 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573710 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573729 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573748 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573766 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573783 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573800 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573818 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573835 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573853 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573880 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573897 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573917 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573935 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573954 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573972 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.573989 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574006 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574023 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574048 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574067 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574085 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574105 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574122 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574142 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574161 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574180 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574198 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574218 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574238 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574290 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574311 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574329 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574347 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574363 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574380 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574397 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574415 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574432 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574479 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574497 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574543 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574563 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574580 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574598 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574615 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574631 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574649 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574668 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574685 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574702 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574721 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574738 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574757 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574773 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574791 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574809 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574826 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574846 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574864 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574882 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574899 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574916 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574933 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574950 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574967 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.574984 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575000 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575018 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575036 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575052 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575068 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575086 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575103 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575122 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575142 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575158 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575176 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575194 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575212 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575229 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575247 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575266 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575293 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575313 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575330 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575348 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575365 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575384 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575403 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575420 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575438 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.575455 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581152 4832 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581202 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581224 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581241 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581255 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state"
pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581269 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581281 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581294 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581306 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581318 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581330 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581342 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581355 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581369 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581384 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581396 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581417 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581431 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581443 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581456 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581469 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581484 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581498 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581540 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581553 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581568 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581581 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581594 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581608 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581624 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581638 4832 reconstruct.go:97] "Volume reconstruction finished" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.581648 4832 reconciler.go:26] "Reconciler: start to sync state" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.583099 4832 manager.go:324] Recovery completed Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.591642 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.592935 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.592967 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.592979 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.593951 4832 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.593969 4832 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.593987 4832 state_mem.go:36] "Initialized new in-memory state store" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.614979 4832 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.618344 4832 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.618426 4832 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.618467 4832 kubelet.go:2335] "Starting kubelet main sync loop" Mar 12 14:47:22 crc kubenswrapper[4832]: E0312 14:47:22.618708 4832 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 14:47:22 crc kubenswrapper[4832]: W0312 14:47:22.619286 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Mar 12 14:47:22 crc kubenswrapper[4832]: E0312 14:47:22.619346 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:47:22 crc kubenswrapper[4832]: E0312 14:47:22.639018 4832 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.656560 4832 policy_none.go:49] "None policy: Start" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.658211 4832 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.658384 4832 state_mem.go:35] "Initializing new in-memory state store" Mar 12 14:47:22 crc kubenswrapper[4832]: E0312 14:47:22.719630 4832 kubelet.go:2359] 
"Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.733241 4832 manager.go:334] "Starting Device Plugin manager" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.733399 4832 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.733475 4832 server.go:79] "Starting device plugin registration server" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.734147 4832 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.734429 4832 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.734911 4832 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.735067 4832 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.735140 4832 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 14:47:22 crc kubenswrapper[4832]: E0312 14:47:22.741777 4832 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 14:47:22 crc kubenswrapper[4832]: E0312 14:47:22.750774 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="400ms" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.835723 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:22 crc 
kubenswrapper[4832]: I0312 14:47:22.839130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.839206 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.839231 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.839287 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 14:47:22 crc kubenswrapper[4832]: E0312 14:47:22.841061 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.227:6443: connect: connection refused" node="crc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.920632 4832 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.923310 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.926006 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.926172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.926282 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:22 crc kubenswrapper[4832]: 
I0312 14:47:22.926542 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.926719 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.926790 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.928039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.928067 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.928081 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.928214 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.928378 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.928443 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.928746 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.928787 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.928803 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.929705 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.929742 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.929752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.929877 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.930026 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.930078 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.930282 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.930538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.930667 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.930972 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.931054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.931071 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.931394 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.931423 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.931437 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.931606 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.931814 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.931643 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.933055 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.933178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.933277 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.933698 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.933727 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.933735 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.933894 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.933926 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.934567 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.934698 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.934812 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.984946 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.985105 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.985248 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 14:47:22 crc 
kubenswrapper[4832]: I0312 14:47:22.985371 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.985498 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.985640 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.985749 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.985861 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.985971 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.986162 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.986309 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.986431 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.986584 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.986724 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 14:47:22 crc kubenswrapper[4832]: I0312 14:47:22.986852 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.041964 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.043450 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.043499 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.043547 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.043587 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 14:47:23 crc kubenswrapper[4832]: E0312 14:47:23.044117 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.227:6443: connect: connection refused" node="crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.088660 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.088717 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.088754 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.088785 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.088831 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.088860 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.088890 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.088899 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.088901 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.088944 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.088899 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.088921 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.088953 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.088992 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.088899 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.089044 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.089097 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" 
(UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.089115 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.089147 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.089220 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.089230 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.089267 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.089268 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.089306 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.089347 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.089381 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.089381 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.089427 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.089450 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.089545 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: E0312 14:47:23.151707 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="800ms" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.264361 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.271712 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.290442 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.305398 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.310365 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 14:47:23 crc kubenswrapper[4832]: W0312 14:47:23.342284 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Mar 12 14:47:23 crc kubenswrapper[4832]: E0312 14:47:23.342349 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.444283 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.446002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.446038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.446049 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.446069 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 14:47:23 crc kubenswrapper[4832]: E0312 14:47:23.446586 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.227:6443: connect: connection refused" node="crc" Mar 12 14:47:23 crc kubenswrapper[4832]: W0312 14:47:23.455372 4832 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Mar 12 14:47:23 crc kubenswrapper[4832]: E0312 14:47:23.455531 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:47:23 crc kubenswrapper[4832]: W0312 14:47:23.469138 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-cd624920e5aaff24639dcdcaecaab2fddcb91494e689f34ac95a8abdf20f6834 WatchSource:0}: Error finding container cd624920e5aaff24639dcdcaecaab2fddcb91494e689f34ac95a8abdf20f6834: Status 404 returned error can't find the container with id cd624920e5aaff24639dcdcaecaab2fddcb91494e689f34ac95a8abdf20f6834 Mar 12 14:47:23 crc kubenswrapper[4832]: W0312 14:47:23.471438 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-f4e64e52fe31f1a35cf502c6f663374093d22f317365b9168798ba01f0449c51 WatchSource:0}: Error finding container f4e64e52fe31f1a35cf502c6f663374093d22f317365b9168798ba01f0449c51: Status 404 returned error can't find the container with id f4e64e52fe31f1a35cf502c6f663374093d22f317365b9168798ba01f0449c51 Mar 12 14:47:23 crc kubenswrapper[4832]: W0312 14:47:23.472882 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-0aff6cdd45b1c2cba298bf4ae96503f8f0f4737693c692818f3342b2156c2fa4 WatchSource:0}: Error finding container 0aff6cdd45b1c2cba298bf4ae96503f8f0f4737693c692818f3342b2156c2fa4: Status 404 returned error can't find the container with id 0aff6cdd45b1c2cba298bf4ae96503f8f0f4737693c692818f3342b2156c2fa4 Mar 12 14:47:23 crc kubenswrapper[4832]: W0312 14:47:23.473600 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-a29a9568f1f363c2df78b3bd366238da1bca2f4d4bcaaa65bfae46a5b5d91e31 WatchSource:0}: Error finding container a29a9568f1f363c2df78b3bd366238da1bca2f4d4bcaaa65bfae46a5b5d91e31: Status 404 returned error can't find the container with id a29a9568f1f363c2df78b3bd366238da1bca2f4d4bcaaa65bfae46a5b5d91e31 Mar 12 14:47:23 crc kubenswrapper[4832]: W0312 14:47:23.474946 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-bc49e3c723809a0c6e5c9901c04f01df57d67e790269e83a6bca4920013f7da4 WatchSource:0}: Error finding container bc49e3c723809a0c6e5c9901c04f01df57d67e790269e83a6bca4920013f7da4: Status 404 returned error can't find the container with id bc49e3c723809a0c6e5c9901c04f01df57d67e790269e83a6bca4920013f7da4 Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.534718 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.623881 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"bc49e3c723809a0c6e5c9901c04f01df57d67e790269e83a6bca4920013f7da4"} Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.625164 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cd624920e5aaff24639dcdcaecaab2fddcb91494e689f34ac95a8abdf20f6834"} Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.626303 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f4e64e52fe31f1a35cf502c6f663374093d22f317365b9168798ba01f0449c51"} Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.627600 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0aff6cdd45b1c2cba298bf4ae96503f8f0f4737693c692818f3342b2156c2fa4"} Mar 12 14:47:23 crc kubenswrapper[4832]: W0312 14:47:23.627937 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Mar 12 14:47:23 crc kubenswrapper[4832]: E0312 14:47:23.628069 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:47:23 crc kubenswrapper[4832]: I0312 14:47:23.629179 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a29a9568f1f363c2df78b3bd366238da1bca2f4d4bcaaa65bfae46a5b5d91e31"} Mar 12 14:47:23 crc kubenswrapper[4832]: W0312 14:47:23.632894 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Mar 12 14:47:23 crc kubenswrapper[4832]: E0312 14:47:23.632997 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:47:23 crc kubenswrapper[4832]: E0312 14:47:23.953217 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="1.6s" Mar 12 14:47:24 crc kubenswrapper[4832]: I0312 14:47:24.247636 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:24 crc kubenswrapper[4832]: I0312 14:47:24.249358 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:24 crc kubenswrapper[4832]: I0312 14:47:24.249410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:24 crc kubenswrapper[4832]: I0312 14:47:24.249419 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:24 crc kubenswrapper[4832]: I0312 14:47:24.249443 4832 kubelet_node_status.go:76] "Attempting to 
register node" node="crc" Mar 12 14:47:24 crc kubenswrapper[4832]: E0312 14:47:24.249885 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.227:6443: connect: connection refused" node="crc" Mar 12 14:47:24 crc kubenswrapper[4832]: I0312 14:47:24.470409 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 14:47:24 crc kubenswrapper[4832]: E0312 14:47:24.471692 4832 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:47:24 crc kubenswrapper[4832]: I0312 14:47:24.534664 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Mar 12 14:47:25 crc kubenswrapper[4832]: W0312 14:47:25.385238 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Mar 12 14:47:25 crc kubenswrapper[4832]: E0312 14:47:25.385359 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:47:25 crc kubenswrapper[4832]: I0312 
14:47:25.535096 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Mar 12 14:47:25 crc kubenswrapper[4832]: E0312 14:47:25.562228 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="3.2s" Mar 12 14:47:25 crc kubenswrapper[4832]: I0312 14:47:25.850384 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:25 crc kubenswrapper[4832]: I0312 14:47:25.851371 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:25 crc kubenswrapper[4832]: I0312 14:47:25.851407 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:25 crc kubenswrapper[4832]: I0312 14:47:25.851418 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:25 crc kubenswrapper[4832]: I0312 14:47:25.851440 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 14:47:25 crc kubenswrapper[4832]: E0312 14:47:25.851900 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.227:6443: connect: connection refused" node="crc" Mar 12 14:47:25 crc kubenswrapper[4832]: W0312 14:47:25.975314 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection 
refused Mar 12 14:47:25 crc kubenswrapper[4832]: E0312 14:47:25.975400 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:47:26 crc kubenswrapper[4832]: W0312 14:47:26.218691 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Mar 12 14:47:26 crc kubenswrapper[4832]: E0312 14:47:26.218796 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:47:26 crc kubenswrapper[4832]: I0312 14:47:26.534215 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Mar 12 14:47:26 crc kubenswrapper[4832]: W0312 14:47:26.677573 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Mar 12 14:47:26 crc kubenswrapper[4832]: E0312 14:47:26.677658 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:47:26 crc kubenswrapper[4832]: E0312 14:47:26.684803 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.227:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189c1f5e27b6fc19 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.530823193 +0000 UTC m=+1.174837459,LastTimestamp:2026-03-12 14:47:22.530823193 +0000 UTC m=+1.174837459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.534175 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.640813 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5" exitCode=0 Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.640934 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5"} Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.640948 4832 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.642258 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.642299 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.642320 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.644089 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"849c392dc7281e1c55b222a27474e17b82931f55c373a52b6069ce535fdf7b74"} Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.644149 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9276e887ebf6a570e0c7707f87257a4d155c33e59d354ab45ab02c9e1d03598d"} Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.644367 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.646377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.646436 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.646459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:27 crc 
kubenswrapper[4832]: I0312 14:47:27.647365 4832 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43" exitCode=0
Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.647399 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43"}
Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.650873 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.652694 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.652735 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.652753 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.653682 4832 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01" exitCode=0
Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.653788 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01"}
Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.653827 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.655180 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.655236 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.655257 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.655974 4832 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029" exitCode=0
Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.656022 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029"}
Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.656068 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.656973 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.657017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:27 crc kubenswrapper[4832]: I0312 14:47:27.657029 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.535029 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.663647 4832 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8" exitCode=0
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.663785 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8"}
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.663838 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.665272 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.665306 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.665317 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.667746 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553"}
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.667993 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.669648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.669721 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.669741 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.672405 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"329f16cd40740ccf2e50f78cfe5eb2d7cf8c47e37447f97e40259aedda51ea86"}
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.672463 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"440f6901fa28a9d67506031e20032120e27453bd4a3f0d7a6d0a61972050e8f2"}
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.678115 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e"}
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.678158 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c"}
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.681257 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a41f9611a9244b788b1601aaf55ce32bb185b05f4fb13897b9a94e8755f00a11"}
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.681296 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b6e71ce6d8cc9f654140e7b6b73d67bd7a81b80a2518b5f32feb7cc1a2a95450"}
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.681391 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.682436 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.682475 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.682489 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:28 crc kubenswrapper[4832]: E0312 14:47:28.762824 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="6.4s"
Mar 12 14:47:28 crc kubenswrapper[4832]: I0312 14:47:28.840721 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 12 14:47:28 crc kubenswrapper[4832]: E0312 14:47:28.842296 4832 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError"
Mar 12 14:47:28 crc kubenswrapper[4832]: W0312 14:47:28.984316 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused
Mar 12 14:47:28 crc kubenswrapper[4832]: E0312 14:47:28.984404 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.052316 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.058761 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.058814 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.058825 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.058872 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 12 14:47:29 crc kubenswrapper[4832]: E0312 14:47:29.059359 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.227:6443: connect: connection refused" node="crc"
Mar 12 14:47:29 crc kubenswrapper[4832]: W0312 14:47:29.308292 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused
Mar 12 14:47:29 crc kubenswrapper[4832]: E0312 14:47:29.308374 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.534833 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.685199 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f6c103b990f3bedc31a6be395538872d3a80f0ad18990155f3c9ae7388e1138d"}
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.685337 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.686483 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.686543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.686555 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.688708 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"20679c17b421aaf7cbcc3322c7811abf67cd2da64b457d66d777e2bde93f1450"}
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.688771 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c"}
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.688794 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3"}
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.688926 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.690072 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.690124 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.690144 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.691739 4832 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9" exitCode=0
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.691773 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9"}
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.691821 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.691875 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.691879 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.692713 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.692761 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.692778 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.692964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.692997 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.693009 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.693061 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.693083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.693125 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:29 crc kubenswrapper[4832]: I0312 14:47:29.716429 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 12 14:47:30 crc kubenswrapper[4832]: I0312 14:47:30.001815 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 12 14:47:30 crc kubenswrapper[4832]: I0312 14:47:30.701339 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1d7bc318ab91bd7950b647065c452bd0e38608a4cec73140dc6ffdae5e70bad7"}
Mar 12 14:47:30 crc kubenswrapper[4832]: I0312 14:47:30.701407 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bdc3ac75fa8e4697061b914425ae2471c6bfbc699ede95831627d8a80df3418d"}
Mar 12 14:47:30 crc kubenswrapper[4832]: I0312 14:47:30.701435 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"73ce28d9a27fe64b6fd0a13696d453aa22ae968db063da8d9c1cefff1b6ed7fe"}
Mar 12 14:47:30 crc kubenswrapper[4832]: I0312 14:47:30.701453 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fa7479f99bdfa90e500450c70c2b1d64b7bbd15e7bf649cc889bab78e70ee896"}
Mar 12 14:47:30 crc kubenswrapper[4832]: I0312 14:47:30.703757 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Mar 12 14:47:30 crc kubenswrapper[4832]: I0312 14:47:30.707098 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="20679c17b421aaf7cbcc3322c7811abf67cd2da64b457d66d777e2bde93f1450" exitCode=255
Mar 12 14:47:30 crc kubenswrapper[4832]: I0312 14:47:30.707262 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:30 crc kubenswrapper[4832]: I0312 14:47:30.707659 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:30 crc kubenswrapper[4832]: I0312 14:47:30.707764 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"20679c17b421aaf7cbcc3322c7811abf67cd2da64b457d66d777e2bde93f1450"}
Mar 12 14:47:30 crc kubenswrapper[4832]: I0312 14:47:30.708225 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:30 crc kubenswrapper[4832]: I0312 14:47:30.708255 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:30 crc kubenswrapper[4832]: I0312 14:47:30.708265 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:30 crc kubenswrapper[4832]: I0312 14:47:30.708365 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:30 crc kubenswrapper[4832]: I0312 14:47:30.708383 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:30 crc kubenswrapper[4832]: I0312 14:47:30.708393 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:30 crc kubenswrapper[4832]: I0312 14:47:30.708823 4832 scope.go:117] "RemoveContainer" containerID="20679c17b421aaf7cbcc3322c7811abf67cd2da64b457d66d777e2bde93f1450"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.292640 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.292875 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.294064 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.294125 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.294142 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.585188 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.688657 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.712686 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.715230 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"05caed7b07177592d9e008580402969eeb4b2beb0d48b761fdd1f701bf454bd6"}
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.715388 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.715491 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.716926 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.716980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.716996 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.720926 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8355918ef44f602e4aed6dae9b5c26ba632d93685c715c8d458c6817a2c41c71"}
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.720972 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.720997 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.721076 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.722205 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.722215 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.722289 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.722253 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.722307 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.722360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.722397 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.722329 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:31 crc kubenswrapper[4832]: I0312 14:47:31.722419 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:32 crc kubenswrapper[4832]: I0312 14:47:32.723775 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:32 crc kubenswrapper[4832]: I0312 14:47:32.723800 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:32 crc kubenswrapper[4832]: I0312 14:47:32.723939 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 12 14:47:32 crc kubenswrapper[4832]: I0312 14:47:32.724837 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:32 crc kubenswrapper[4832]: I0312 14:47:32.724889 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:32 crc kubenswrapper[4832]: I0312 14:47:32.724914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:32 crc kubenswrapper[4832]: I0312 14:47:32.724974 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:32 crc kubenswrapper[4832]: I0312 14:47:32.725020 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:32 crc kubenswrapper[4832]: I0312 14:47:32.725042 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:32 crc kubenswrapper[4832]: E0312 14:47:32.741946 4832 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 12 14:47:33 crc kubenswrapper[4832]: I0312 14:47:33.726902 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:33 crc kubenswrapper[4832]: I0312 14:47:33.728193 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:33 crc kubenswrapper[4832]: I0312 14:47:33.728247 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:33 crc kubenswrapper[4832]: I0312 14:47:33.728267 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:34 crc kubenswrapper[4832]: I0312 14:47:34.125253 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 12 14:47:34 crc kubenswrapper[4832]: I0312 14:47:34.125501 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:34 crc kubenswrapper[4832]: I0312 14:47:34.127886 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:34 crc kubenswrapper[4832]: I0312 14:47:34.127973 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:34 crc kubenswrapper[4832]: I0312 14:47:34.128002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:34 crc kubenswrapper[4832]: I0312 14:47:34.134919 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 12 14:47:34 crc kubenswrapper[4832]: I0312 14:47:34.292900 4832 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 12 14:47:34 crc kubenswrapper[4832]: I0312 14:47:34.292995 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:47:34 crc kubenswrapper[4832]: I0312 14:47:34.730119 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:34 crc kubenswrapper[4832]: I0312 14:47:34.731740 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:34 crc kubenswrapper[4832]: I0312 14:47:34.731794 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:34 crc kubenswrapper[4832]: I0312 14:47:34.731812 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:35 crc kubenswrapper[4832]: I0312 14:47:35.460196 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:35 crc kubenswrapper[4832]: I0312 14:47:35.462062 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:35 crc kubenswrapper[4832]: I0312 14:47:35.462096 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:35 crc kubenswrapper[4832]: I0312 14:47:35.462107 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:35 crc kubenswrapper[4832]: I0312 14:47:35.462132 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 12 14:47:36 crc kubenswrapper[4832]: I0312 14:47:36.363680 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Mar 12 14:47:36 crc kubenswrapper[4832]: I0312 14:47:36.363919 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:36 crc kubenswrapper[4832]: I0312 14:47:36.365304 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:36 crc kubenswrapper[4832]: I0312 14:47:36.365362 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:36 crc kubenswrapper[4832]: I0312 14:47:36.365379 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:36 crc kubenswrapper[4832]: I0312 14:47:36.386074 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Mar 12 14:47:36 crc kubenswrapper[4832]: I0312 14:47:36.735552 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:36 crc kubenswrapper[4832]: I0312 14:47:36.737026 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:36 crc kubenswrapper[4832]: I0312 14:47:36.737093 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:36 crc kubenswrapper[4832]: I0312 14:47:36.737134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:37 crc kubenswrapper[4832]: I0312 14:47:37.483117 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 12 14:47:38 crc kubenswrapper[4832]: I0312 14:47:38.660678 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 12 14:47:38 crc kubenswrapper[4832]: I0312 14:47:38.660939 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:38 crc kubenswrapper[4832]: I0312 14:47:38.662536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:38 crc kubenswrapper[4832]: I0312 14:47:38.662631 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:38 crc kubenswrapper[4832]: I0312 14:47:38.662651 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:38 crc kubenswrapper[4832]: I0312 14:47:38.667783 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 12 14:47:38 crc kubenswrapper[4832]: I0312 14:47:38.740808 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:47:38 crc kubenswrapper[4832]: I0312 14:47:38.742389 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 14:47:38 crc kubenswrapper[4832]: I0312 14:47:38.742473 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 14:47:38 crc kubenswrapper[4832]: I0312 14:47:38.742579 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 14:47:39 crc kubenswrapper[4832]: I0312 14:47:39.944802 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:39Z is after 2026-02-23T05:33:13Z
Mar 12 14:47:39 crc kubenswrapper[4832]: W0312 14:47:39.948471 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:39Z is after 2026-02-23T05:33:13Z
Mar 12 14:47:39 crc kubenswrapper[4832]: E0312 14:47:39.948617 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:39Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 12 14:47:39 crc kubenswrapper[4832]: E0312 14:47:39.948967 4832 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:39Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 12 14:47:39 crc kubenswrapper[4832]: E0312 14:47:39.949280 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:39Z is after 2026-02-23T05:33:13Z" interval="7s"
Mar 12 14:47:39 crc kubenswrapper[4832]: W0312 14:47:39.952492 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:39Z is after 2026-02-23T05:33:13Z
Mar 12 14:47:39 crc kubenswrapper[4832]: E0312 14:47:39.952594 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:39Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 12 14:47:39 crc kubenswrapper[4832]: I0312 14:47:39.953131 4832 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Mar 12 14:47:39 crc kubenswrapper[4832]: I0312 14:47:39.953194 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Mar 12 14:47:39 crc kubenswrapper[4832]: W0312 14:47:39.953593 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:39Z is after 2026-02-23T05:33:13Z
Mar 12 14:47:39 crc kubenswrapper[4832]: E0312 14:47:39.953666 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:39Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 12 14:47:39 crc kubenswrapper[4832]: E0312 14:47:39.956209 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has
expired or is not yet valid: current time 2026-03-12T14:47:39Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189c1f5e27b6fc19 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.530823193 +0000 UTC m=+1.174837459,LastTimestamp:2026-03-12 14:47:22.530823193 +0000 UTC m=+1.174837459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:47:39 crc kubenswrapper[4832]: E0312 14:47:39.957900 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:39Z is after 2026-02-23T05:33:13Z" node="crc" Mar 12 14:47:39 crc kubenswrapper[4832]: I0312 14:47:39.958122 4832 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 12 14:47:39 crc kubenswrapper[4832]: I0312 14:47:39.958174 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 12 14:47:39 crc kubenswrapper[4832]: W0312 14:47:39.959291 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:39Z is after 2026-02-23T05:33:13Z Mar 12 14:47:39 crc kubenswrapper[4832]: E0312 14:47:39.959395 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:39Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 14:47:40 crc kubenswrapper[4832]: I0312 14:47:40.539594 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:40Z is after 2026-02-23T05:33:13Z Mar 12 14:47:40 crc kubenswrapper[4832]: I0312 14:47:40.750305 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 12 14:47:40 crc kubenswrapper[4832]: I0312 14:47:40.751793 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 12 14:47:40 crc kubenswrapper[4832]: I0312 14:47:40.754398 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="05caed7b07177592d9e008580402969eeb4b2beb0d48b761fdd1f701bf454bd6" exitCode=255 Mar 12 14:47:40 crc kubenswrapper[4832]: I0312 14:47:40.754462 4832 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"05caed7b07177592d9e008580402969eeb4b2beb0d48b761fdd1f701bf454bd6"} Mar 12 14:47:40 crc kubenswrapper[4832]: I0312 14:47:40.754616 4832 scope.go:117] "RemoveContainer" containerID="20679c17b421aaf7cbcc3322c7811abf67cd2da64b457d66d777e2bde93f1450" Mar 12 14:47:40 crc kubenswrapper[4832]: I0312 14:47:40.754805 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:40 crc kubenswrapper[4832]: I0312 14:47:40.756603 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:40 crc kubenswrapper[4832]: I0312 14:47:40.756635 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:40 crc kubenswrapper[4832]: I0312 14:47:40.756651 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:40 crc kubenswrapper[4832]: I0312 14:47:40.757580 4832 scope.go:117] "RemoveContainer" containerID="05caed7b07177592d9e008580402969eeb4b2beb0d48b761fdd1f701bf454bd6" Mar 12 14:47:40 crc kubenswrapper[4832]: E0312 14:47:40.757918 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 14:47:41 crc kubenswrapper[4832]: I0312 14:47:41.539435 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-12T14:47:41Z is after 2026-02-23T05:33:13Z Mar 12 14:47:41 crc kubenswrapper[4832]: I0312 14:47:41.595001 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:47:41 crc kubenswrapper[4832]: I0312 14:47:41.759422 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 12 14:47:41 crc kubenswrapper[4832]: I0312 14:47:41.762073 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:41 crc kubenswrapper[4832]: I0312 14:47:41.763442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:41 crc kubenswrapper[4832]: I0312 14:47:41.763531 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:41 crc kubenswrapper[4832]: I0312 14:47:41.763546 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:41 crc kubenswrapper[4832]: I0312 14:47:41.764147 4832 scope.go:117] "RemoveContainer" containerID="05caed7b07177592d9e008580402969eeb4b2beb0d48b761fdd1f701bf454bd6" Mar 12 14:47:41 crc kubenswrapper[4832]: E0312 14:47:41.764347 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 14:47:41 crc kubenswrapper[4832]: I0312 14:47:41.768612 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:47:42 crc kubenswrapper[4832]: I0312 14:47:42.539841 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:42Z is after 2026-02-23T05:33:13Z Mar 12 14:47:42 crc kubenswrapper[4832]: E0312 14:47:42.742371 4832 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 14:47:42 crc kubenswrapper[4832]: I0312 14:47:42.764990 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:42 crc kubenswrapper[4832]: I0312 14:47:42.766334 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:42 crc kubenswrapper[4832]: I0312 14:47:42.766437 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:42 crc kubenswrapper[4832]: I0312 14:47:42.766455 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:42 crc kubenswrapper[4832]: I0312 14:47:42.767410 4832 scope.go:117] "RemoveContainer" containerID="05caed7b07177592d9e008580402969eeb4b2beb0d48b761fdd1f701bf454bd6" Mar 12 14:47:42 crc kubenswrapper[4832]: E0312 14:47:42.767733 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 14:47:43 
crc kubenswrapper[4832]: I0312 14:47:43.355393 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:47:43 crc kubenswrapper[4832]: I0312 14:47:43.539200 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:43Z is after 2026-02-23T05:33:13Z Mar 12 14:47:43 crc kubenswrapper[4832]: I0312 14:47:43.767976 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:43 crc kubenswrapper[4832]: I0312 14:47:43.769292 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:43 crc kubenswrapper[4832]: I0312 14:47:43.769362 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:43 crc kubenswrapper[4832]: I0312 14:47:43.769387 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:43 crc kubenswrapper[4832]: I0312 14:47:43.770253 4832 scope.go:117] "RemoveContainer" containerID="05caed7b07177592d9e008580402969eeb4b2beb0d48b761fdd1f701bf454bd6" Mar 12 14:47:43 crc kubenswrapper[4832]: E0312 14:47:43.770576 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 14:47:44 crc kubenswrapper[4832]: I0312 14:47:44.294217 4832 patch_prober.go:28] interesting 
pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:47:44 crc kubenswrapper[4832]: I0312 14:47:44.294650 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:47:44 crc kubenswrapper[4832]: I0312 14:47:44.540243 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:44Z is after 2026-02-23T05:33:13Z Mar 12 14:47:45 crc kubenswrapper[4832]: I0312 14:47:45.537553 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:45Z is after 2026-02-23T05:33:13Z Mar 12 14:47:46 crc kubenswrapper[4832]: I0312 14:47:46.399602 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 12 14:47:46 crc kubenswrapper[4832]: I0312 14:47:46.399774 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:46 crc kubenswrapper[4832]: I0312 14:47:46.400877 4832 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:46 crc kubenswrapper[4832]: I0312 14:47:46.401097 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:46 crc kubenswrapper[4832]: I0312 14:47:46.401267 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:46 crc kubenswrapper[4832]: I0312 14:47:46.417023 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 12 14:47:46 crc kubenswrapper[4832]: I0312 14:47:46.538316 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:46Z is after 2026-02-23T05:33:13Z Mar 12 14:47:46 crc kubenswrapper[4832]: I0312 14:47:46.775181 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:46 crc kubenswrapper[4832]: I0312 14:47:46.776395 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:46 crc kubenswrapper[4832]: I0312 14:47:46.776473 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:46 crc kubenswrapper[4832]: I0312 14:47:46.776490 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:46 crc kubenswrapper[4832]: E0312 14:47:46.956018 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-03-12T14:47:46Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 12 14:47:46 crc kubenswrapper[4832]: I0312 14:47:46.957980 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:46 crc kubenswrapper[4832]: I0312 14:47:46.959351 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:46 crc kubenswrapper[4832]: I0312 14:47:46.959428 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:46 crc kubenswrapper[4832]: I0312 14:47:46.959452 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:46 crc kubenswrapper[4832]: I0312 14:47:46.959533 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 14:47:46 crc kubenswrapper[4832]: E0312 14:47:46.965644 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:46Z is after 2026-02-23T05:33:13Z" node="crc" Mar 12 14:47:47 crc kubenswrapper[4832]: I0312 14:47:47.538100 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:47Z is after 2026-02-23T05:33:13Z Mar 12 14:47:48 crc kubenswrapper[4832]: I0312 14:47:48.538964 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-03-12T14:47:48Z is after 2026-02-23T05:33:13Z Mar 12 14:47:49 crc kubenswrapper[4832]: I0312 14:47:49.540203 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:49Z is after 2026-02-23T05:33:13Z Mar 12 14:47:49 crc kubenswrapper[4832]: E0312 14:47:49.961368 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:49Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189c1f5e27b6fc19 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.530823193 +0000 UTC m=+1.174837459,LastTimestamp:2026-03-12 14:47:22.530823193 +0000 UTC m=+1.174837459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:47:50 crc kubenswrapper[4832]: I0312 14:47:50.538918 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:50Z is after 2026-02-23T05:33:13Z Mar 12 14:47:50 crc kubenswrapper[4832]: W0312 14:47:50.742878 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:50Z is after 2026-02-23T05:33:13Z Mar 12 14:47:50 crc kubenswrapper[4832]: E0312 14:47:50.742968 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:50Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 14:47:51 crc kubenswrapper[4832]: W0312 14:47:51.239826 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:51Z is after 2026-02-23T05:33:13Z Mar 12 14:47:51 crc kubenswrapper[4832]: E0312 14:47:51.239934 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:51Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 14:47:51 crc kubenswrapper[4832]: I0312 14:47:51.540251 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-12T14:47:51Z is after 2026-02-23T05:33:13Z Mar 12 14:47:52 crc kubenswrapper[4832]: I0312 14:47:52.539641 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:52Z is after 2026-02-23T05:33:13Z Mar 12 14:47:52 crc kubenswrapper[4832]: E0312 14:47:52.742657 4832 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 14:47:53 crc kubenswrapper[4832]: I0312 14:47:53.537680 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:53Z is after 2026-02-23T05:33:13Z Mar 12 14:47:53 crc kubenswrapper[4832]: E0312 14:47:53.960705 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:53Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 12 14:47:53 crc kubenswrapper[4832]: I0312 14:47:53.965937 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:53 crc kubenswrapper[4832]: I0312 14:47:53.966992 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:53 crc kubenswrapper[4832]: I0312 14:47:53.967049 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:53 crc 
kubenswrapper[4832]: I0312 14:47:53.967059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:53 crc kubenswrapper[4832]: I0312 14:47:53.967088 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 14:47:53 crc kubenswrapper[4832]: E0312 14:47:53.969981 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:53Z is after 2026-02-23T05:33:13Z" node="crc" Mar 12 14:47:54 crc kubenswrapper[4832]: I0312 14:47:54.293377 4832 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:47:54 crc kubenswrapper[4832]: I0312 14:47:54.293483 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:47:54 crc kubenswrapper[4832]: I0312 14:47:54.293605 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 14:47:54 crc kubenswrapper[4832]: I0312 14:47:54.293787 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:54 crc kubenswrapper[4832]: I0312 14:47:54.294933 4832 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:54 crc kubenswrapper[4832]: I0312 14:47:54.294987 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:54 crc kubenswrapper[4832]: I0312 14:47:54.294999 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:54 crc kubenswrapper[4832]: I0312 14:47:54.295559 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"849c392dc7281e1c55b222a27474e17b82931f55c373a52b6069ce535fdf7b74"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 12 14:47:54 crc kubenswrapper[4832]: I0312 14:47:54.295769 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://849c392dc7281e1c55b222a27474e17b82931f55c373a52b6069ce535fdf7b74" gracePeriod=30 Mar 12 14:47:54 crc kubenswrapper[4832]: I0312 14:47:54.539033 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:54Z is after 2026-02-23T05:33:13Z Mar 12 14:47:54 crc kubenswrapper[4832]: I0312 14:47:54.800068 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 12 14:47:54 crc kubenswrapper[4832]: I0312 14:47:54.800556 4832 generic.go:334] "Generic (PLEG): container 
finished" podID="f614b9022728cf315e60c057852e563e" containerID="849c392dc7281e1c55b222a27474e17b82931f55c373a52b6069ce535fdf7b74" exitCode=255 Mar 12 14:47:54 crc kubenswrapper[4832]: I0312 14:47:54.800597 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"849c392dc7281e1c55b222a27474e17b82931f55c373a52b6069ce535fdf7b74"} Mar 12 14:47:54 crc kubenswrapper[4832]: I0312 14:47:54.800647 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f7ff6af37a71ff334d7ab7ad24a97a77223ea7952329eb08d1e7e8345c907fce"} Mar 12 14:47:54 crc kubenswrapper[4832]: I0312 14:47:54.800775 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:54 crc kubenswrapper[4832]: I0312 14:47:54.801748 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:54 crc kubenswrapper[4832]: I0312 14:47:54.801779 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:54 crc kubenswrapper[4832]: I0312 14:47:54.801789 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:55 crc kubenswrapper[4832]: I0312 14:47:55.538907 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:55Z is after 2026-02-23T05:33:13Z Mar 12 14:47:55 crc kubenswrapper[4832]: I0312 14:47:55.619334 4832 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Mar 12 14:47:55 crc kubenswrapper[4832]: I0312 14:47:55.620472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:55 crc kubenswrapper[4832]: I0312 14:47:55.620514 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:55 crc kubenswrapper[4832]: I0312 14:47:55.620524 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:55 crc kubenswrapper[4832]: I0312 14:47:55.621019 4832 scope.go:117] "RemoveContainer" containerID="05caed7b07177592d9e008580402969eeb4b2beb0d48b761fdd1f701bf454bd6" Mar 12 14:47:55 crc kubenswrapper[4832]: I0312 14:47:55.805671 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 12 14:47:56 crc kubenswrapper[4832]: I0312 14:47:56.450399 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 14:47:56 crc kubenswrapper[4832]: E0312 14:47:56.453871 4832 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:56Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 14:47:56 crc kubenswrapper[4832]: E0312 14:47:56.455092 4832 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Mar 12 14:47:56 
crc kubenswrapper[4832]: I0312 14:47:56.537605 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:56Z is after 2026-02-23T05:33:13Z Mar 12 14:47:56 crc kubenswrapper[4832]: I0312 14:47:56.812027 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 12 14:47:56 crc kubenswrapper[4832]: I0312 14:47:56.812922 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 12 14:47:56 crc kubenswrapper[4832]: I0312 14:47:56.815181 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ad9b0685a22a18da72ccb0aaf5fce6ed9afeec0643c8fabd98eaf77124f72c86" exitCode=255 Mar 12 14:47:56 crc kubenswrapper[4832]: I0312 14:47:56.815241 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ad9b0685a22a18da72ccb0aaf5fce6ed9afeec0643c8fabd98eaf77124f72c86"} Mar 12 14:47:56 crc kubenswrapper[4832]: I0312 14:47:56.815290 4832 scope.go:117] "RemoveContainer" containerID="05caed7b07177592d9e008580402969eeb4b2beb0d48b761fdd1f701bf454bd6" Mar 12 14:47:56 crc kubenswrapper[4832]: I0312 14:47:56.815533 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:56 crc kubenswrapper[4832]: I0312 14:47:56.821109 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:56 crc kubenswrapper[4832]: I0312 
14:47:56.821160 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:56 crc kubenswrapper[4832]: I0312 14:47:56.821177 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:56 crc kubenswrapper[4832]: I0312 14:47:56.822771 4832 scope.go:117] "RemoveContainer" containerID="ad9b0685a22a18da72ccb0aaf5fce6ed9afeec0643c8fabd98eaf77124f72c86" Mar 12 14:47:56 crc kubenswrapper[4832]: E0312 14:47:56.823431 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 14:47:57 crc kubenswrapper[4832]: I0312 14:47:57.538615 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:57Z is after 2026-02-23T05:33:13Z Mar 12 14:47:57 crc kubenswrapper[4832]: I0312 14:47:57.820482 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 12 14:47:58 crc kubenswrapper[4832]: I0312 14:47:58.540880 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:58Z is after 2026-02-23T05:33:13Z Mar 12 14:47:58 crc 
kubenswrapper[4832]: I0312 14:47:58.660791 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 14:47:58 crc kubenswrapper[4832]: I0312 14:47:58.660992 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:47:58 crc kubenswrapper[4832]: I0312 14:47:58.662403 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:47:58 crc kubenswrapper[4832]: I0312 14:47:58.662658 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:47:58 crc kubenswrapper[4832]: I0312 14:47:58.662852 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:47:59 crc kubenswrapper[4832]: I0312 14:47:59.539343 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:59Z is after 2026-02-23T05:33:13Z Mar 12 14:47:59 crc kubenswrapper[4832]: E0312 14:47:59.967063 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:59Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189c1f5e27b6fc19 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.530823193 +0000 UTC 
m=+1.174837459,LastTimestamp:2026-03-12 14:47:22.530823193 +0000 UTC m=+1.174837459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:00 crc kubenswrapper[4832]: W0312 14:48:00.405818 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:00Z is after 2026-02-23T05:33:13Z Mar 12 14:48:00 crc kubenswrapper[4832]: E0312 14:48:00.405905 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 14:48:00 crc kubenswrapper[4832]: I0312 14:48:00.536838 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:00Z is after 2026-02-23T05:33:13Z Mar 12 14:48:00 crc kubenswrapper[4832]: E0312 14:48:00.963445 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:00Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 12 
14:48:00 crc kubenswrapper[4832]: I0312 14:48:00.970728 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:48:00 crc kubenswrapper[4832]: I0312 14:48:00.971953 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:00 crc kubenswrapper[4832]: I0312 14:48:00.971996 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:00 crc kubenswrapper[4832]: I0312 14:48:00.972007 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:00 crc kubenswrapper[4832]: I0312 14:48:00.972032 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 14:48:00 crc kubenswrapper[4832]: E0312 14:48:00.974395 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:00Z is after 2026-02-23T05:33:13Z" node="crc" Mar 12 14:48:01 crc kubenswrapper[4832]: I0312 14:48:01.293801 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 14:48:01 crc kubenswrapper[4832]: I0312 14:48:01.294015 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:48:01 crc kubenswrapper[4832]: I0312 14:48:01.295457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:01 crc kubenswrapper[4832]: I0312 14:48:01.295535 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:01 crc kubenswrapper[4832]: I0312 14:48:01.295548 4832 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:01 crc kubenswrapper[4832]: I0312 14:48:01.536351 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:01Z is after 2026-02-23T05:33:13Z Mar 12 14:48:02 crc kubenswrapper[4832]: I0312 14:48:02.538410 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:02 crc kubenswrapper[4832]: E0312 14:48:02.743126 4832 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 14:48:03 crc kubenswrapper[4832]: I0312 14:48:03.355649 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:48:03 crc kubenswrapper[4832]: I0312 14:48:03.355818 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:48:03 crc kubenswrapper[4832]: I0312 14:48:03.356940 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:03 crc kubenswrapper[4832]: I0312 14:48:03.357062 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:03 crc kubenswrapper[4832]: I0312 14:48:03.357102 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:03 crc kubenswrapper[4832]: I0312 14:48:03.357901 4832 scope.go:117] "RemoveContainer" 
containerID="ad9b0685a22a18da72ccb0aaf5fce6ed9afeec0643c8fabd98eaf77124f72c86" Mar 12 14:48:03 crc kubenswrapper[4832]: E0312 14:48:03.358126 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 14:48:03 crc kubenswrapper[4832]: I0312 14:48:03.537162 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:48:03 crc kubenswrapper[4832]: I0312 14:48:03.539192 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:03 crc kubenswrapper[4832]: I0312 14:48:03.840300 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:48:03 crc kubenswrapper[4832]: I0312 14:48:03.841087 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:03 crc kubenswrapper[4832]: I0312 14:48:03.841127 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:03 crc kubenswrapper[4832]: I0312 14:48:03.841138 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:03 crc kubenswrapper[4832]: I0312 14:48:03.841613 4832 scope.go:117] "RemoveContainer" containerID="ad9b0685a22a18da72ccb0aaf5fce6ed9afeec0643c8fabd98eaf77124f72c86" Mar 12 14:48:03 crc kubenswrapper[4832]: E0312 14:48:03.841781 4832 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 14:48:04 crc kubenswrapper[4832]: I0312 14:48:04.294016 4832 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:48:04 crc kubenswrapper[4832]: I0312 14:48:04.294109 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:48:04 crc kubenswrapper[4832]: I0312 14:48:04.538769 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:04 crc kubenswrapper[4832]: W0312 14:48:04.601991 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 12 14:48:04 crc kubenswrapper[4832]: E0312 14:48:04.602045 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list 
*v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 12 14:48:05 crc kubenswrapper[4832]: I0312 14:48:05.540326 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:06 crc kubenswrapper[4832]: I0312 14:48:06.541978 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:07 crc kubenswrapper[4832]: I0312 14:48:07.542666 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:07 crc kubenswrapper[4832]: E0312 14:48:07.965875 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 14:48:07 crc kubenswrapper[4832]: I0312 14:48:07.974847 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:48:07 crc kubenswrapper[4832]: I0312 14:48:07.977032 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:07 crc kubenswrapper[4832]: I0312 14:48:07.977084 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:07 crc kubenswrapper[4832]: I0312 14:48:07.977097 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:07 crc kubenswrapper[4832]: I0312 14:48:07.977132 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 14:48:07 crc kubenswrapper[4832]: E0312 14:48:07.983296 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 14:48:08 crc kubenswrapper[4832]: I0312 14:48:08.539218 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:09 crc kubenswrapper[4832]: I0312 14:48:09.539498 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:09 crc kubenswrapper[4832]: E0312 14:48:09.974650 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e27b6fc19 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.530823193 +0000 UTC m=+1.174837459,LastTimestamp:2026-03-12 14:47:22.530823193 +0000 UTC m=+1.174837459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:09 crc 
kubenswrapper[4832]: E0312 14:48:09.979826 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b1fe3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592960483 +0000 UTC m=+1.236974709,LastTimestamp:2026-03-12 14:47:22.592960483 +0000 UTC m=+1.236974709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:09 crc kubenswrapper[4832]: E0312 14:48:09.985874 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b5849 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592974921 +0000 UTC m=+1.236989147,LastTimestamp:2026-03-12 14:47:22.592974921 +0000 UTC m=+1.236989147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:09 crc kubenswrapper[4832]: E0312 14:48:09.994787 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b7df9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592984569 +0000 UTC m=+1.236998795,LastTimestamp:2026-03-12 14:47:22.592984569 +0000 UTC m=+1.236998795,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.000569 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e3401839d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.737034141 +0000 UTC m=+1.381048377,LastTimestamp:2026-03-12 14:47:22.737034141 +0000 UTC m=+1.381048377,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.006748 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b1fe3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b1fe3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592960483 +0000 UTC m=+1.236974709,LastTimestamp:2026-03-12 14:47:22.839193791 +0000 UTC m=+1.483208037,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.013013 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b5849\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b5849 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592974921 +0000 UTC m=+1.236989147,LastTimestamp:2026-03-12 14:47:22.839215748 +0000 UTC m=+1.483229994,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.018920 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b7df9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b7df9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592984569 +0000 UTC m=+1.236998795,LastTimestamp:2026-03-12 14:47:22.839239025 +0000 UTC m=+1.483253261,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.023729 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b1fe3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b1fe3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592960483 +0000 UTC m=+1.236974709,LastTimestamp:2026-03-12 14:47:22.926161414 +0000 UTC m=+1.570175660,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.028760 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b5849\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b5849 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592974921 +0000 UTC 
m=+1.236989147,LastTimestamp:2026-03-12 14:47:22.926273208 +0000 UTC m=+1.570287444,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.033687 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b7df9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b7df9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592984569 +0000 UTC m=+1.236998795,LastTimestamp:2026-03-12 14:47:22.926372255 +0000 UTC m=+1.570386491,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.038947 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b1fe3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b1fe3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592960483 +0000 UTC m=+1.236974709,LastTimestamp:2026-03-12 14:47:22.928059861 +0000 UTC m=+1.572074097,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.043585 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b5849\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b5849 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592974921 +0000 UTC m=+1.236989147,LastTimestamp:2026-03-12 14:47:22.928076539 +0000 UTC m=+1.572090785,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.048812 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b7df9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b7df9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592984569 +0000 UTC m=+1.236998795,LastTimestamp:2026-03-12 14:47:22.928088927 +0000 UTC m=+1.572103173,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.054105 4832 event.go:359] 
"Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b1fe3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b1fe3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592960483 +0000 UTC m=+1.236974709,LastTimestamp:2026-03-12 14:47:22.928772272 +0000 UTC m=+1.572786508,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.059287 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b5849\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b5849 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592974921 +0000 UTC m=+1.236989147,LastTimestamp:2026-03-12 14:47:22.928797149 +0000 UTC m=+1.572811385,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.064048 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b7df9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API 
group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b7df9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592984569 +0000 UTC m=+1.236998795,LastTimestamp:2026-03-12 14:47:22.928811177 +0000 UTC m=+1.572825413,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.069276 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b1fe3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b1fe3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592960483 +0000 UTC m=+1.236974709,LastTimestamp:2026-03-12 14:47:22.92972938 +0000 UTC m=+1.573743616,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.074517 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b5849\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b5849 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592974921 +0000 UTC m=+1.236989147,LastTimestamp:2026-03-12 14:47:22.929748907 +0000 UTC m=+1.573763143,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.079040 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b7df9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b7df9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592984569 +0000 UTC m=+1.236998795,LastTimestamp:2026-03-12 14:47:22.929758516 +0000 UTC m=+1.573772752,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.083128 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b1fe3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b1fe3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592960483 +0000 UTC m=+1.236974709,LastTimestamp:2026-03-12 14:47:22.930486325 +0000 UTC m=+1.574500571,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.086853 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b5849\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b5849 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592974921 +0000 UTC m=+1.236989147,LastTimestamp:2026-03-12 14:47:22.930656052 +0000 UTC m=+1.574670288,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.092235 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b7df9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b7df9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592984569 +0000 UTC 
m=+1.236998795,LastTimestamp:2026-03-12 14:47:22.931006353 +0000 UTC m=+1.575020589,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.097601 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b1fe3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b1fe3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592960483 +0000 UTC m=+1.236974709,LastTimestamp:2026-03-12 14:47:22.931037429 +0000 UTC m=+1.575051675,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.099897 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1f5e2b6b5849\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1f5e2b6b5849 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:22.592974921 +0000 UTC m=+1.236989147,LastTimestamp:2026-03-12 14:47:22.931065235 +0000 UTC m=+1.575079471,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.105421 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f5e607eb872 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:23.48343717 +0000 UTC m=+2.127451406,LastTimestamp:2026-03-12 14:47:23.48343717 +0000 UTC m=+2.127451406,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.112910 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c1f5e60802b37 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" 
already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:23.483532087 +0000 UTC m=+2.127546333,LastTimestamp:2026-03-12 14:47:23.483532087 +0000 UTC m=+2.127546333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.114344 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f5e6080a383 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:23.483562883 +0000 UTC m=+2.127577109,LastTimestamp:2026-03-12 14:47:23.483562883 +0000 UTC m=+2.127577109,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.117774 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1f5e60825c59 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:23.483675737 +0000 UTC m=+2.127689973,LastTimestamp:2026-03-12 14:47:23.483675737 +0000 UTC m=+2.127689973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.124447 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5e60872650 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:23.483989584 +0000 UTC m=+2.128003820,LastTimestamp:2026-03-12 14:47:23.483989584 +0000 UTC m=+2.128003820,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.129023 4832 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c1f5f1f72cbbc openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:26.687103932 +0000 UTC m=+5.331118168,LastTimestamp:2026-03-12 14:47:26.687103932 +0000 UTC m=+5.331118168,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.132396 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1f5f1f73fa17 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:26.687181335 +0000 UTC m=+5.331195561,LastTimestamp:2026-03-12 14:47:26.687181335 +0000 UTC m=+5.331195561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc 
kubenswrapper[4832]: E0312 14:48:10.136700 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5f1f783144 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:26.687457604 +0000 UTC m=+5.331471830,LastTimestamp:2026-03-12 14:47:26.687457604 +0000 UTC m=+5.331471830,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.141422 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f5f1f7b0636 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:26.68764319 +0000 UTC m=+5.331657426,LastTimestamp:2026-03-12 14:47:26.68764319 +0000 UTC m=+5.331657426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.145715 4832 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f5f1f7d481f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:26.687791135 +0000 UTC m=+5.331805361,LastTimestamp:2026-03-12 14:47:26.687791135 +0000 UTC m=+5.331805361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.149534 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f5f225e74b2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:26.736102578 +0000 UTC m=+5.380116844,LastTimestamp:2026-03-12 14:47:26.736102578 +0000 UTC m=+5.380116844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: 
E0312 14:48:10.153550 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1f5f225ff029 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:26.736199721 +0000 UTC m=+5.380213987,LastTimestamp:2026-03-12 14:47:26.736199721 +0000 UTC m=+5.380213987,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.159717 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c1f5f2313ffd2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:26.74800021 +0000 UTC m=+5.392014476,LastTimestamp:2026-03-12 14:47:26.74800021 +0000 UTC m=+5.392014476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.167012 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5f2317c4b2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:26.748247218 +0000 UTC m=+5.392261484,LastTimestamp:2026-03-12 14:47:26.748247218 +0000 UTC m=+5.392261484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.171996 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f5f231a9ec2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:26.748434114 +0000 UTC m=+5.392448380,LastTimestamp:2026-03-12 14:47:26.748434114 
+0000 UTC m=+5.392448380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.177491 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f5f2333b674 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:26.75007858 +0000 UTC m=+5.394092846,LastTimestamp:2026-03-12 14:47:26.75007858 +0000 UTC m=+5.394092846,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.183143 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f5f4089d37b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:27.242261371 +0000 UTC m=+5.886275607,LastTimestamp:2026-03-12 14:47:27.242261371 +0000 UTC m=+5.886275607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.187787 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f5f4574689d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:27.324743837 +0000 UTC m=+5.968758093,LastTimestamp:2026-03-12 14:47:27.324743837 +0000 UTC m=+5.968758093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.193451 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f5f458b6ddc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:27.326252508 +0000 UTC m=+5.970266724,LastTimestamp:2026-03-12 14:47:27.326252508 +0000 UTC m=+5.970266724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.199231 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5f587cb408 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:27.644054536 +0000 UTC m=+6.288068802,LastTimestamp:2026-03-12 14:47:27.644054536 +0000 UTC 
m=+6.288068802,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.205771 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f5f58efed8d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:27.651605901 +0000 UTC m=+6.295620137,LastTimestamp:2026-03-12 14:47:27.651605901 +0000 UTC m=+6.295620137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.211627 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f5f5921249c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:27.65483126 +0000 UTC m=+6.298845496,LastTimestamp:2026-03-12 14:47:27.65483126 +0000 UTC m=+6.298845496,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.218065 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1f5f5974496e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:27.660280174 +0000 UTC m=+6.304294400,LastTimestamp:2026-03-12 14:47:27.660280174 +0000 UTC m=+6.304294400,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.222744 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c1f5f59786508 
openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:27.660549384 +0000 UTC m=+6.304563650,LastTimestamp:2026-03-12 14:47:27.660549384 +0000 UTC m=+6.304563650,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.227885 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f5f61d93f87 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:27.801114503 +0000 UTC m=+6.445128769,LastTimestamp:2026-03-12 14:47:27.801114503 +0000 UTC m=+6.445128769,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 
14:48:10.233128 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f5f6200fb51 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:27.803718481 +0000 UTC m=+6.447732707,LastTimestamp:2026-03-12 14:47:27.803718481 +0000 UTC m=+6.447732707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.236779 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5f71672682 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.06207245 +0000 UTC 
m=+6.706086716,LastTimestamp:2026-03-12 14:47:28.06207245 +0000 UTC m=+6.706086716,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.241388 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f5f74c7f7f7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.118749175 +0000 UTC m=+6.762763401,LastTimestamp:2026-03-12 14:47:28.118749175 +0000 UTC m=+6.762763401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.245280 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c1f5f74cc1502 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container 
kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.119018754 +0000 UTC m=+6.763033010,LastTimestamp:2026-03-12 14:47:28.119018754 +0000 UTC m=+6.763033010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.249082 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f5f74d1ca63 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.119392867 +0000 UTC m=+6.763407103,LastTimestamp:2026-03-12 14:47:28.119392867 +0000 UTC m=+6.763407103,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.252732 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1f5f74dcc205 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.120111621 +0000 UTC m=+6.764125897,LastTimestamp:2026-03-12 14:47:28.120111621 +0000 UTC m=+6.764125897,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.256303 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5f7526e790 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.124970896 +0000 UTC m=+6.768985132,LastTimestamp:2026-03-12 14:47:28.124970896 +0000 UTC m=+6.768985132,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.260619 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5f753dfef9 openshift-kube-apiserver 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.126484217 +0000 UTC m=+6.770498483,LastTimestamp:2026-03-12 14:47:28.126484217 +0000 UTC m=+6.770498483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.264566 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f5f79042fbc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.189804476 +0000 UTC m=+6.833818742,LastTimestamp:2026-03-12 14:47:28.189804476 +0000 UTC m=+6.833818742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.269008 4832 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1f5f799d91c0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.199856576 +0000 UTC m=+6.843870842,LastTimestamp:2026-03-12 14:47:28.199856576 +0000 UTC m=+6.843870842,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.275869 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c1f5f79ab8cc7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.200772807 +0000 UTC m=+6.844787063,LastTimestamp:2026-03-12 14:47:28.200772807 +0000 UTC m=+6.844787063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.280221 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1f5f79b06161 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.201089377 +0000 UTC m=+6.845103643,LastTimestamp:2026-03-12 14:47:28.201089377 +0000 UTC m=+6.845103643,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.286692 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f5f7a23a1b3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 
14:47:28.208642483 +0000 UTC m=+6.852656709,LastTimestamp:2026-03-12 14:47:28.208642483 +0000 UTC m=+6.852656709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.293735 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5f8a59e835 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.480634933 +0000 UTC m=+7.124649169,LastTimestamp:2026-03-12 14:47:28.480634933 +0000 UTC m=+7.124649169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.300676 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1f5f8f0adda2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container 
kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.559340962 +0000 UTC m=+7.203355218,LastTimestamp:2026-03-12 14:47:28.559340962 +0000 UTC m=+7.203355218,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.306086 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5f8f3579bd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.562133437 +0000 UTC m=+7.206147703,LastTimestamp:2026-03-12 14:47:28.562133437 +0000 UTC m=+7.206147703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.315615 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5f8f4ebeb4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.563789492 +0000 UTC m=+7.207803748,LastTimestamp:2026-03-12 14:47:28.563789492 +0000 UTC m=+7.207803748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.321089 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1f5f90226d17 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.577662231 +0000 UTC m=+7.221676497,LastTimestamp:2026-03-12 14:47:28.577662231 +0000 UTC m=+7.221676497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.325674 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1f5f903e6659 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.579495513 +0000 UTC m=+7.223509779,LastTimestamp:2026-03-12 14:47:28.579495513 +0000 UTC m=+7.223509779,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.330654 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f5f9575894c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.66699502 +0000 UTC m=+7.311009256,LastTimestamp:2026-03-12 14:47:28.66699502 +0000 UTC 
m=+7.311009256,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.338044 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5f9d704879 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.800868473 +0000 UTC m=+7.444882689,LastTimestamp:2026-03-12 14:47:28.800868473 +0000 UTC m=+7.444882689,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.342837 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1f5f9d71ba86 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container 
kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.800963206 +0000 UTC m=+7.444977432,LastTimestamp:2026-03-12 14:47:28.800963206 +0000 UTC m=+7.444977432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.349314 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5f9e4b5a04 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.815225348 +0000 UTC m=+7.459239574,LastTimestamp:2026-03-12 14:47:28.815225348 +0000 UTC m=+7.459239574,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.354011 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1f5f9e570ca6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.815991974 +0000 UTC m=+7.460006200,LastTimestamp:2026-03-12 14:47:28.815991974 +0000 UTC m=+7.460006200,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.360000 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5f9e66913e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.817008958 +0000 UTC m=+7.461023184,LastTimestamp:2026-03-12 14:47:28.817008958 +0000 UTC m=+7.461023184,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.366706 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f5fa11e0b0a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.862587658 +0000 UTC m=+7.506601894,LastTimestamp:2026-03-12 14:47:28.862587658 +0000 UTC m=+7.506601894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.372740 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f5fa20ae0f1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:28.878108913 +0000 UTC m=+7.522123139,LastTimestamp:2026-03-12 14:47:28.878108913 +0000 UTC m=+7.522123139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.377225 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5faaf5a2b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:29.027711668 +0000 UTC m=+7.671725894,LastTimestamp:2026-03-12 14:47:29.027711668 +0000 UTC m=+7.671725894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.382382 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5fabe4831f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:29.043366687 +0000 UTC m=+7.687380913,LastTimestamp:2026-03-12 14:47:29.043366687 +0000 UTC m=+7.687380913,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.388078 4832 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5fabf3f907 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:29.044379911 +0000 UTC m=+7.688394147,LastTimestamp:2026-03-12 14:47:29.044379911 +0000 UTC m=+7.688394147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.393234 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5fb70c891f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:29.230539039 +0000 UTC m=+7.874553305,LastTimestamp:2026-03-12 14:47:29.230539039 +0000 UTC 
m=+7.874553305,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.397393 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5fb801d9eb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:29.246616043 +0000 UTC m=+7.890630279,LastTimestamp:2026-03-12 14:47:29.246616043 +0000 UTC m=+7.890630279,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.402648 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f5fd2ae0b25 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:29.694108453 +0000 UTC m=+8.338122679,LastTimestamp:2026-03-12 14:47:29.694108453 +0000 UTC m=+8.338122679,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.406227 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f5fdd46c2cc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:29.8718891 +0000 UTC m=+8.515903326,LastTimestamp:2026-03-12 14:47:29.8718891 +0000 UTC m=+8.515903326,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.410183 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f5fddae7c28 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:29.87868676 
+0000 UTC m=+8.522700986,LastTimestamp:2026-03-12 14:47:29.87868676 +0000 UTC m=+8.522700986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.416159 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f5fddbf0896 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:29.879771286 +0000 UTC m=+8.523785512,LastTimestamp:2026-03-12 14:47:29.879771286 +0000 UTC m=+8.523785512,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.421151 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f5fe8bcddaf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container 
etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:30.064178607 +0000 UTC m=+8.708192873,LastTimestamp:2026-03-12 14:47:30.064178607 +0000 UTC m=+8.708192873,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.427272 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f5fe9646d82 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:30.075159938 +0000 UTC m=+8.719174174,LastTimestamp:2026-03-12 14:47:30.075159938 +0000 UTC m=+8.719174174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.434780 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f5fe973c830 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:30.076166192 +0000 UTC m=+8.720180438,LastTimestamp:2026-03-12 14:47:30.076166192 +0000 UTC m=+8.720180438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.441633 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f5ff5265dca openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:30.272419274 +0000 UTC m=+8.916433500,LastTimestamp:2026-03-12 14:47:30.272419274 +0000 UTC m=+8.916433500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.447079 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f5ff5dc6113 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:30.284347667 +0000 UTC m=+8.928361893,LastTimestamp:2026-03-12 14:47:30.284347667 +0000 UTC m=+8.928361893,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.453690 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f5ff5ed6b04 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:30.285464324 +0000 UTC m=+8.929478550,LastTimestamp:2026-03-12 14:47:30.285464324 +0000 UTC m=+8.929478550,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.460683 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f60018ee71f 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:30.480596767 +0000 UTC m=+9.124611003,LastTimestamp:2026-03-12 14:47:30.480596767 +0000 UTC m=+9.124611003,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.468184 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f60026b6522 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:30.495046946 +0000 UTC m=+9.139061212,LastTimestamp:2026-03-12 14:47:30.495046946 +0000 UTC m=+9.139061212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.474987 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f60027e110f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:30.496270607 +0000 UTC m=+9.140284853,LastTimestamp:2026-03-12 14:47:30.496270607 +0000 UTC m=+9.140284853,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.482166 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f600e4809ee openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:30.69405643 +0000 UTC m=+9.338070676,LastTimestamp:2026-03-12 14:47:30.69405643 +0000 UTC m=+9.338070676,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.489994 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1f600f195d67 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:30.707774823 +0000 UTC m=+9.351789049,LastTimestamp:2026-03-12 14:47:30.707774823 +0000 UTC m=+9.351789049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.498718 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c1f5fabf3f907\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5fabf3f907 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:29.044379911 +0000 UTC m=+7.688394147,LastTimestamp:2026-03-12 14:47:30.709669597 +0000 UTC m=+9.353683833,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.505171 4832 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-apiserver-crc.189c1f5fb70c891f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5fb70c891f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:29.230539039 +0000 UTC m=+7.874553305,LastTimestamp:2026-03-12 14:47:30.888722917 +0000 UTC m=+9.532737143,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.512575 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c1f5fb801d9eb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f5fb801d9eb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:29.246616043 +0000 UTC m=+7.890630279,LastTimestamp:2026-03-12 14:47:30.900275378 +0000 UTC m=+9.544289614,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.522462 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 12 14:48:10 crc kubenswrapper[4832]: &Event{ObjectMeta:{kube-controller-manager-crc.189c1f60e4cb21b9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 12 14:48:10 crc kubenswrapper[4832]: body: Mar 12 14:48:10 crc kubenswrapper[4832]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:34.292971961 +0000 UTC m=+12.936986217,LastTimestamp:2026-03-12 14:47:34.292971961 +0000 UTC m=+12.936986217,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 14:48:10 crc kubenswrapper[4832]: > Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.528682 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f60e4cbfa69 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:34.293027433 +0000 UTC m=+12.937041669,LastTimestamp:2026-03-12 14:47:34.293027433 +0000 UTC m=+12.937041669,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: I0312 14:48:10.538557 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.539631 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 12 14:48:10 crc kubenswrapper[4832]: &Event{ObjectMeta:{kube-apiserver-crc.189c1f62362affb5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 12 14:48:10 crc kubenswrapper[4832]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path 
\"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 12 14:48:10 crc kubenswrapper[4832]: Mar 12 14:48:10 crc kubenswrapper[4832]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:39.953176501 +0000 UTC m=+18.597190767,LastTimestamp:2026-03-12 14:47:39.953176501 +0000 UTC m=+18.597190767,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 14:48:10 crc kubenswrapper[4832]: > Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.546035 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f62362bb583 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:39.953223043 +0000 UTC m=+18.597237309,LastTimestamp:2026-03-12 14:47:39.953223043 +0000 UTC m=+18.597237309,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.553385 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c1f62362affb5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 12 14:48:10 crc kubenswrapper[4832]: &Event{ObjectMeta:{kube-apiserver-crc.189c1f62362affb5 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 12 14:48:10 crc kubenswrapper[4832]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 12 14:48:10 crc kubenswrapper[4832]: Mar 12 14:48:10 crc kubenswrapper[4832]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:39.953176501 +0000 UTC m=+18.597190767,LastTimestamp:2026-03-12 14:47:39.95815814 +0000 UTC m=+18.602172396,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 14:48:10 crc kubenswrapper[4832]: > Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.560362 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c1f62362bb583\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1f62362bb583 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:39.953223043 +0000 UTC m=+18.597237309,LastTimestamp:2026-03-12 14:47:39.958201661 +0000 UTC 
m=+18.602215917,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.568437 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 12 14:48:10 crc kubenswrapper[4832]: &Event{ObjectMeta:{kube-controller-manager-crc.189c1f6338f00fa6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 12 14:48:10 crc kubenswrapper[4832]: body: Mar 12 14:48:10 crc kubenswrapper[4832]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:44.294612902 +0000 UTC m=+22.938627218,LastTimestamp:2026-03-12 14:47:44.294612902 +0000 UTC m=+22.938627218,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 14:48:10 crc kubenswrapper[4832]: > Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.573462 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f6338f46465 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:44.294896741 +0000 UTC m=+22.938911037,LastTimestamp:2026-03-12 14:47:44.294896741 +0000 UTC m=+22.938911037,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.581015 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c1f6338f00fa6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 12 14:48:10 crc kubenswrapper[4832]: &Event{ObjectMeta:{kube-controller-manager-crc.189c1f6338f00fa6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 12 14:48:10 crc kubenswrapper[4832]: body: Mar 12 14:48:10 crc kubenswrapper[4832]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:44.294612902 +0000 UTC m=+22.938627218,LastTimestamp:2026-03-12 14:47:54.293458166 
+0000 UTC m=+32.937472402,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 14:48:10 crc kubenswrapper[4832]: > Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.586841 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c1f6338f46465\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f6338f46465 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:44.294896741 +0000 UTC m=+22.938911037,LastTimestamp:2026-03-12 14:47:54.293547549 +0000 UTC m=+32.937561795,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.594429 4832 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f658d0d4d42 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:54.295749954 +0000 UTC m=+32.939764190,LastTimestamp:2026-03-12 14:47:54.295749954 +0000 UTC m=+32.939764190,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.601758 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c1f5f2333b674\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f5f2333b674 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:26.75007858 +0000 UTC m=+5.394092846,LastTimestamp:2026-03-12 14:47:54.418960897 +0000 UTC m=+33.062975123,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.613895 4832 
event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c1f5f4089d37b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f5f4089d37b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:27.242261371 +0000 UTC m=+5.886275607,LastTimestamp:2026-03-12 14:47:54.602904212 +0000 UTC m=+33.246918478,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.620318 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c1f5f4574689d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f5f4574689d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:27.324743837 +0000 UTC 
m=+5.968758093,LastTimestamp:2026-03-12 14:47:54.614901787 +0000 UTC m=+33.258916053,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.628498 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c1f6338f00fa6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 12 14:48:10 crc kubenswrapper[4832]: &Event{ObjectMeta:{kube-controller-manager-crc.189c1f6338f00fa6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 12 14:48:10 crc kubenswrapper[4832]: body: Mar 12 14:48:10 crc kubenswrapper[4832]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:44.294612902 +0000 UTC m=+22.938627218,LastTimestamp:2026-03-12 14:48:04.294081851 +0000 UTC m=+42.938096117,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 14:48:10 crc kubenswrapper[4832]: > Mar 12 14:48:10 crc kubenswrapper[4832]: E0312 14:48:10.630684 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c1f6338f46465\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1f6338f46465 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:44.294896741 +0000 UTC m=+22.938911037,LastTimestamp:2026-03-12 14:48:04.294151593 +0000 UTC m=+42.938165859,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:48:11 crc kubenswrapper[4832]: I0312 14:48:11.539906 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:12 crc kubenswrapper[4832]: W0312 14:48:12.334303 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 12 14:48:12 crc kubenswrapper[4832]: E0312 14:48:12.334385 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 12 14:48:12 crc kubenswrapper[4832]: I0312 
14:48:12.541307 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:12 crc kubenswrapper[4832]: E0312 14:48:12.743669 4832 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 14:48:13 crc kubenswrapper[4832]: I0312 14:48:13.542166 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:13 crc kubenswrapper[4832]: W0312 14:48:13.744973 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:13 crc kubenswrapper[4832]: E0312 14:48:13.745036 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 12 14:48:14 crc kubenswrapper[4832]: I0312 14:48:14.293603 4832 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:48:14 crc kubenswrapper[4832]: I0312 14:48:14.293989 4832 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:48:14 crc kubenswrapper[4832]: E0312 14:48:14.300965 4832 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c1f6338f00fa6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 12 14:48:14 crc kubenswrapper[4832]: &Event{ObjectMeta:{kube-controller-manager-crc.189c1f6338f00fa6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 12 14:48:14 crc kubenswrapper[4832]: body: Mar 12 14:48:14 crc kubenswrapper[4832]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:47:44.294612902 +0000 UTC m=+22.938627218,LastTimestamp:2026-03-12 14:48:14.293951427 +0000 UTC m=+52.937965693,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 14:48:14 crc kubenswrapper[4832]: > Mar 12 14:48:14 crc kubenswrapper[4832]: I0312 14:48:14.541833 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get 
resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:14 crc kubenswrapper[4832]: E0312 14:48:14.973587 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 14:48:14 crc kubenswrapper[4832]: I0312 14:48:14.983651 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:48:14 crc kubenswrapper[4832]: I0312 14:48:14.985086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:14 crc kubenswrapper[4832]: I0312 14:48:14.985125 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:14 crc kubenswrapper[4832]: I0312 14:48:14.985138 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:14 crc kubenswrapper[4832]: I0312 14:48:14.985173 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 14:48:14 crc kubenswrapper[4832]: E0312 14:48:14.990095 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 14:48:15 crc kubenswrapper[4832]: I0312 14:48:15.543826 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:15 crc kubenswrapper[4832]: I0312 14:48:15.619064 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:48:15 crc kubenswrapper[4832]: 
I0312 14:48:15.620780 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:15 crc kubenswrapper[4832]: I0312 14:48:15.620849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:15 crc kubenswrapper[4832]: I0312 14:48:15.620872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:15 crc kubenswrapper[4832]: I0312 14:48:15.621869 4832 scope.go:117] "RemoveContainer" containerID="ad9b0685a22a18da72ccb0aaf5fce6ed9afeec0643c8fabd98eaf77124f72c86" Mar 12 14:48:15 crc kubenswrapper[4832]: E0312 14:48:15.622210 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 14:48:16 crc kubenswrapper[4832]: I0312 14:48:16.540745 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:17 crc kubenswrapper[4832]: I0312 14:48:17.541648 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:18 crc kubenswrapper[4832]: I0312 14:48:18.541030 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Mar 12 14:48:19 crc kubenswrapper[4832]: I0312 14:48:19.539476 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:20 crc kubenswrapper[4832]: I0312 14:48:20.009273 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 14:48:20 crc kubenswrapper[4832]: I0312 14:48:20.009425 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:48:20 crc kubenswrapper[4832]: I0312 14:48:20.010641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:20 crc kubenswrapper[4832]: I0312 14:48:20.010673 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:20 crc kubenswrapper[4832]: I0312 14:48:20.010684 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:20 crc kubenswrapper[4832]: I0312 14:48:20.538667 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:21 crc kubenswrapper[4832]: I0312 14:48:21.541345 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:21 crc kubenswrapper[4832]: E0312 14:48:21.979719 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is 
forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 14:48:21 crc kubenswrapper[4832]: I0312 14:48:21.990379 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:48:21 crc kubenswrapper[4832]: I0312 14:48:21.991877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:21 crc kubenswrapper[4832]: I0312 14:48:21.991993 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:21 crc kubenswrapper[4832]: I0312 14:48:21.992071 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:21 crc kubenswrapper[4832]: I0312 14:48:21.992168 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 14:48:22 crc kubenswrapper[4832]: E0312 14:48:22.002367 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 14:48:22 crc kubenswrapper[4832]: I0312 14:48:22.041880 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 14:48:22 crc kubenswrapper[4832]: I0312 14:48:22.042190 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:48:22 crc kubenswrapper[4832]: I0312 14:48:22.043695 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:22 crc kubenswrapper[4832]: I0312 14:48:22.043734 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:22 crc 
kubenswrapper[4832]: I0312 14:48:22.043743 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:22 crc kubenswrapper[4832]: I0312 14:48:22.048466 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 14:48:22 crc kubenswrapper[4832]: I0312 14:48:22.538570 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:22 crc kubenswrapper[4832]: E0312 14:48:22.744582 4832 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 14:48:22 crc kubenswrapper[4832]: I0312 14:48:22.895554 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:48:22 crc kubenswrapper[4832]: I0312 14:48:22.897198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:22 crc kubenswrapper[4832]: I0312 14:48:22.897278 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:22 crc kubenswrapper[4832]: I0312 14:48:22.897307 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:23 crc kubenswrapper[4832]: I0312 14:48:23.537731 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:24 crc kubenswrapper[4832]: I0312 14:48:24.539539 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:25 crc kubenswrapper[4832]: I0312 14:48:25.540434 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:26 crc kubenswrapper[4832]: I0312 14:48:26.537871 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:27 crc kubenswrapper[4832]: I0312 14:48:27.538425 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:28 crc kubenswrapper[4832]: I0312 14:48:28.456803 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 14:48:28 crc kubenswrapper[4832]: I0312 14:48:28.469372 4832 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 12 14:48:28 crc kubenswrapper[4832]: I0312 14:48:28.538888 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:28 crc kubenswrapper[4832]: I0312 14:48:28.619764 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:48:28 crc kubenswrapper[4832]: I0312 14:48:28.620893 4832 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:28 crc kubenswrapper[4832]: I0312 14:48:28.620925 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:28 crc kubenswrapper[4832]: I0312 14:48:28.620936 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:28 crc kubenswrapper[4832]: I0312 14:48:28.621473 4832 scope.go:117] "RemoveContainer" containerID="ad9b0685a22a18da72ccb0aaf5fce6ed9afeec0643c8fabd98eaf77124f72c86" Mar 12 14:48:28 crc kubenswrapper[4832]: I0312 14:48:28.912676 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 12 14:48:28 crc kubenswrapper[4832]: I0312 14:48:28.915046 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11"} Mar 12 14:48:28 crc kubenswrapper[4832]: I0312 14:48:28.915231 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:48:28 crc kubenswrapper[4832]: I0312 14:48:28.916022 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:28 crc kubenswrapper[4832]: I0312 14:48:28.916050 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:28 crc kubenswrapper[4832]: I0312 14:48:28.916059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:28 crc kubenswrapper[4832]: E0312 14:48:28.983977 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is 
forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 14:48:29 crc kubenswrapper[4832]: I0312 14:48:29.003141 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:48:29 crc kubenswrapper[4832]: I0312 14:48:29.006401 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:29 crc kubenswrapper[4832]: I0312 14:48:29.006460 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:29 crc kubenswrapper[4832]: I0312 14:48:29.006499 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:29 crc kubenswrapper[4832]: I0312 14:48:29.006924 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 14:48:29 crc kubenswrapper[4832]: E0312 14:48:29.015268 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 14:48:29 crc kubenswrapper[4832]: I0312 14:48:29.538200 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:29 crc kubenswrapper[4832]: I0312 14:48:29.918758 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 12 14:48:29 crc kubenswrapper[4832]: I0312 14:48:29.919165 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 12 14:48:29 crc kubenswrapper[4832]: I0312 14:48:29.920907 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11" exitCode=255 Mar 12 14:48:29 crc kubenswrapper[4832]: I0312 14:48:29.920945 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11"} Mar 12 14:48:29 crc kubenswrapper[4832]: I0312 14:48:29.920998 4832 scope.go:117] "RemoveContainer" containerID="ad9b0685a22a18da72ccb0aaf5fce6ed9afeec0643c8fabd98eaf77124f72c86" Mar 12 14:48:29 crc kubenswrapper[4832]: I0312 14:48:29.921088 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:48:29 crc kubenswrapper[4832]: I0312 14:48:29.921839 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:29 crc kubenswrapper[4832]: I0312 14:48:29.921860 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:29 crc kubenswrapper[4832]: I0312 14:48:29.921868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:29 crc kubenswrapper[4832]: I0312 14:48:29.922263 4832 scope.go:117] "RemoveContainer" containerID="628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11" Mar 12 14:48:29 crc kubenswrapper[4832]: E0312 14:48:29.922406 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 14:48:30 crc kubenswrapper[4832]: I0312 14:48:30.538983 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:30 crc kubenswrapper[4832]: I0312 14:48:30.924589 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 12 14:48:31 crc kubenswrapper[4832]: I0312 14:48:31.539054 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:32 crc kubenswrapper[4832]: I0312 14:48:32.537918 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:32 crc kubenswrapper[4832]: E0312 14:48:32.745469 4832 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 14:48:33 crc kubenswrapper[4832]: I0312 14:48:33.355669 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:48:33 crc kubenswrapper[4832]: I0312 14:48:33.356350 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:48:33 crc kubenswrapper[4832]: I0312 14:48:33.358131 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:33 crc kubenswrapper[4832]: I0312 14:48:33.358197 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:33 crc kubenswrapper[4832]: I0312 14:48:33.358216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:33 crc kubenswrapper[4832]: I0312 14:48:33.359166 4832 scope.go:117] "RemoveContainer" containerID="628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11" Mar 12 14:48:33 crc kubenswrapper[4832]: E0312 14:48:33.359441 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 14:48:33 crc kubenswrapper[4832]: I0312 14:48:33.534968 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:33 crc kubenswrapper[4832]: I0312 14:48:33.537165 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:48:33 crc kubenswrapper[4832]: I0312 14:48:33.933496 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:48:33 crc kubenswrapper[4832]: I0312 14:48:33.934514 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:33 crc kubenswrapper[4832]: I0312 14:48:33.934549 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:33 crc kubenswrapper[4832]: I0312 14:48:33.934558 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:33 crc kubenswrapper[4832]: I0312 14:48:33.935071 4832 scope.go:117] "RemoveContainer" containerID="628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11" Mar 12 14:48:33 crc kubenswrapper[4832]: E0312 14:48:33.935248 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 14:48:34 crc kubenswrapper[4832]: I0312 14:48:34.540069 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:35 crc kubenswrapper[4832]: I0312 14:48:35.542260 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:35 crc kubenswrapper[4832]: E0312 14:48:35.992049 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 14:48:36 crc kubenswrapper[4832]: I0312 14:48:36.015943 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Mar 12 14:48:36 crc kubenswrapper[4832]: I0312 14:48:36.017360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:36 crc kubenswrapper[4832]: I0312 14:48:36.017403 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:36 crc kubenswrapper[4832]: I0312 14:48:36.017415 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:36 crc kubenswrapper[4832]: I0312 14:48:36.017438 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 14:48:36 crc kubenswrapper[4832]: E0312 14:48:36.024884 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 14:48:36 crc kubenswrapper[4832]: I0312 14:48:36.538841 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:48:37 crc kubenswrapper[4832]: I0312 14:48:37.481785 4832 csr.go:261] certificate signing request csr-qs5h6 is approved, waiting to be issued Mar 12 14:48:37 crc kubenswrapper[4832]: I0312 14:48:37.497942 4832 csr.go:257] certificate signing request csr-qs5h6 is issued Mar 12 14:48:37 crc kubenswrapper[4832]: I0312 14:48:37.567064 4832 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.031214 4832 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.132316 4832 transport.go:147] "Certificate rotation detected, shutting down client connections to start using 
new credentials" Mar 12 14:48:38 crc kubenswrapper[4832]: W0312 14:48:38.132604 4832 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.499855 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-09 05:09:02.049321575 +0000 UTC Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.500687 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6518h20m23.548660449s for next certificate rotation Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.538683 4832 apiserver.go:52] "Watching apiserver" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.549121 4832 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.549670 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.550433 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.550656 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.550962 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.551081 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.550965 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.551332 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.551454 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.551812 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.551990 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.552929 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.553454 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.553888 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.554616 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.554961 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.555263 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.555565 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.555954 4832 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.556245 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.582867 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.598178 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.610475 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.621767 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.632953 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.640252 4832 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.644675 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.656568 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.666955 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.674310 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.674356 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.674380 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.674408 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.674544 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.674572 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.674599 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.674621 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.674842 
4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:48:39.174796689 +0000 UTC m=+77.818810955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.674909 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.674972 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675024 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675058 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675092 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675126 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675210 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675252 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675292 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675337 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675378 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675416 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675455 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675496 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675663 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675728 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675776 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675782 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675837 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675862 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " 
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675885 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675907 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675930 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675942 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675935 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675945 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675971 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675975 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.675951 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676105 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676155 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676203 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676263 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676321 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676373 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676423 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676472 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676560 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676641 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676695 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676747 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676796 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676843 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676891 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676942 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676993 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677038 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677089 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677140 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677221 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 
12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677275 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677332 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677382 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677429 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677468 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677540 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677595 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677633 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677680 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677729 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677820 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 12 14:48:38 crc 
kubenswrapper[4832]: I0312 14:48:38.677876 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677923 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677975 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678034 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678083 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678130 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678179 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678233 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678289 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678340 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678390 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 
14:48:38.678444 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678497 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678590 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678644 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678697 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678748 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678797 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678844 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678896 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678939 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678973 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 12 14:48:38 crc 
kubenswrapper[4832]: I0312 14:48:38.679023 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679080 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679131 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679188 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679239 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679303 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679363 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679415 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679468 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679558 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679618 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 
14:48:38.679674 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679728 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679783 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679833 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679889 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679940 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679998 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680057 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680110 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680162 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680216 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680274 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680329 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680385 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680438 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680492 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680594 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680651 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680705 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680754 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680812 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680868 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 12 14:48:38 crc 
kubenswrapper[4832]: I0312 14:48:38.680929 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680983 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681032 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681085 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681173 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681234 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681295 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681349 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681386 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681421 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681457 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681494 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681588 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681626 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681660 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681694 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681730 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681780 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681859 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681906 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681956 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681999 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682047 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682115 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682183 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682244 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682296 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682344 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682394 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682450 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682538 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682597 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682652 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682707 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682763 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682817 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682870 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682927 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682985 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " 
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683049 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683103 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683160 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683219 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683272 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683331 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683384 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683437 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683491 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683582 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683641 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683691 
4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683744 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683800 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683854 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683913 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683967 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.684026 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.684082 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.684135 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.684186 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.684240 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.684291 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.684342 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.684398 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.684459 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.684558 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.684621 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.684679 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.684738 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.684798 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.684855 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.684913 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 14:48:38 crc 
kubenswrapper[4832]: I0312 14:48:38.684967 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.685023 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.685077 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.685133 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.685187 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.685273 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.685352 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.685419 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.685485 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.685584 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.685651 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.685713 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.685769 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.685856 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.685922 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.685980 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.686041 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.686097 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.686161 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.686267 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 12 
14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.686302 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.686337 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.686373 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.686406 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.686439 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676247 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676242 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676314 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676328 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676343 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676669 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676713 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.676929 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677037 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677059 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677160 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677600 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677822 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677978 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.677986 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678226 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678236 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678352 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678408 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678618 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.678941 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679189 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.687034 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679277 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.687050 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679303 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679484 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679626 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679848 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679869 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679968 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.679872 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.687670 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.688304 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.688339 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680246 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680435 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680552 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680820 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680822 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.680921 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681166 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.681593 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682131 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682348 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682422 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682617 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682864 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682957 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.682954 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683208 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683720 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683851 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.688656 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.683999 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.684368 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.684698 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.685278 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.685387 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.686076 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.686694 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.689058 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.689076 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.689100 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.689168 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.689579 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.689637 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.689893 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.690593 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.690614 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.690931 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.691230 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.691381 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.691388 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.691852 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.691914 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.692000 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.692552 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.692817 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.693132 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.693255 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.693670 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.694007 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.694116 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.694687 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.694844 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.694930 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.695018 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.695106 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.695319 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.695607 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.695872 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.695894 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.695926 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.695945 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.697280 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.697365 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.697431 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.697814 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.697893 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.697661 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.697956 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:39.197939664 +0000 UTC m=+77.841953900 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.698018 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.697980 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.698044 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.698041 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.698278 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.698540 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.698629 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:39.198603301 +0000 UTC m=+77.842617547 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.699023 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.699215 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.699387 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.699490 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.700142 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.700164 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.700289 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.700492 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.700757 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.701198 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.701414 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.701476 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.701562 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.702943 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.703099 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.703578 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.703635 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.703697 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.703908 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.703972 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.703989 4832 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.704877 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.704892 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.705332 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.705664 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.705708 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.705878 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.705899 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.706037 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.706265 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.711071 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.715466 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.715537 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.715569 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.715583 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.715663 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:39.215641852 +0000 UTC m=+77.859656158 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.715615 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.720940 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.722141 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.724261 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: 
"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.724649 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.726559 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.726596 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.726616 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.726682 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:39.226657151 +0000 UTC m=+77.870671447 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.726882 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.727079 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.727157 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.727267 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.727277 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.727427 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.727573 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.727651 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.727664 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.727698 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.728075 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.728063 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.734376 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.734383 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.734406 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.734599 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.734556 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.734679 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.734686 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.734767 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.734832 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.734854 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.734860 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.734928 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.734942 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.734989 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.735319 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.735600 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.735964 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.736083 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.736093 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.736185 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.736249 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.736600 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.736657 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.736691 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.737638 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.737856 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.737954 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.738196 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.738306 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.738494 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.739049 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.738560 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.738618 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.738646 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.739386 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.739629 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.740321 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.740670 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.741204 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.743143 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.744065 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.748639 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.750796 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787063 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787144 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787287 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787318 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787320 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787344 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787389 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787407 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787473 4832 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787539 4832 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787569 4832 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath 
\"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787587 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787608 4832 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787628 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787648 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787666 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787719 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787741 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787760 4832 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787778 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787796 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787817 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787835 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787853 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787871 4832 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787888 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787906 4832 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787923 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787941 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787962 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787981 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.787998 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788016 4832 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 12 
14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788034 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788051 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788069 4832 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788087 4832 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788107 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788125 4832 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788145 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788163 4832 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788182 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788201 4832 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788220 4832 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788240 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788259 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788278 4832 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788295 4832 
reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788315 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788332 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788349 4832 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788366 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788384 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788401 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788420 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788439 4832 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788457 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788476 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788494 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788552 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788573 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788592 4832 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788611 4832 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788630 4832 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788647 4832 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788666 4832 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788683 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788701 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788718 4832 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:38 crc 
kubenswrapper[4832]: I0312 14:48:38.788736 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788754 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788771 4832 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788788 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788806 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788824 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788843 4832 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788862 4832 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788882 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788899 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788917 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788935 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788953 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788971 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.788990 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789008 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789025 4832 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789043 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789060 4832 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789077 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789095 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789112 4832 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789129 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789147 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789165 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789185 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789202 4832 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789220 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789238 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789255 4832 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789273 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789292 4832 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789310 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789328 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789346 4832 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789363 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789394 4832 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789413 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789431 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789449 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789466 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789485 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789536 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789562 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789586 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789610 4832 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789635 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789652 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789669 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789687 4832 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789704 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789721 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789740 4832 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789757 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789776 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789833 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789852 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789869 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789887 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789904 4832 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789922 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789942 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789959 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789976 4832 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.789993 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790011 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790029 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790046 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790065 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790083 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790100 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790118 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790135 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790153 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790171 4832 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790195 4832 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790213 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790232 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790249 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790266 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790285 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790303 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790319 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790337 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790354 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790371 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790389 4832 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790407 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790424 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790441 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790459 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790478 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790495 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790550 4832 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790571 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790588 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790606 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790623 4832 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790640 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790658 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790675 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790692 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790710 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790727 4832 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790747 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790766 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790784 4832 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790804 4832 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790823 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790840 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790858 4832 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790875 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790894 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790911 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790929 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790946 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790964 4832 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.790982 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.791000 4832 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.791017 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.791036 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.791053 4832 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.791071 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.875217 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.888894 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 12 14:48:38 crc kubenswrapper[4832]: W0312 14:48:38.891342 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-37237332f3d0c1ff712d3910caa96be778ad76b5a98ccddc2a43e1a7737ed17d WatchSource:0}: Error finding container 37237332f3d0c1ff712d3910caa96be778ad76b5a98ccddc2a43e1a7737ed17d: Status 404 returned error can't find the container with id 37237332f3d0c1ff712d3910caa96be778ad76b5a98ccddc2a43e1a7737ed17d
Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.896069 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.902590 4832 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 12 14:48:38 crc kubenswrapper[4832]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash
Mar 12 14:48:38 crc kubenswrapper[4832]: set -o allexport
Mar 12 14:48:38 crc kubenswrapper[4832]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then
Mar 12 14:48:38 crc kubenswrapper[4832]: source /etc/kubernetes/apiserver-url.env
Mar 12 14:48:38 crc kubenswrapper[4832]: else
Mar 12 14:48:38 crc kubenswrapper[4832]: echo "Error: /etc/kubernetes/apiserver-url.env is missing"
Mar 12 14:48:38 crc kubenswrapper[4832]: exit 1
Mar 12 14:48:38 crc kubenswrapper[4832]: fi
Mar 12 14:48:38 crc kubenswrapper[4832]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104
Mar 12 14:48:38 crc kubenswrapper[4832]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metad
ata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 14:48:38 crc kubenswrapper[4832]: > logger="UnhandledError" Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.905481 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 12 14:48:38 crc kubenswrapper[4832]: W0312 14:48:38.911551 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-b0d4c3c707c02a77391c53ea5bec40356752774533d319edce2ff7701a20a6d6 WatchSource:0}: Error finding container b0d4c3c707c02a77391c53ea5bec40356752774533d319edce2ff7701a20a6d6: Status 404 returned error can't find the container with id b0d4c3c707c02a77391c53ea5bec40356752774533d319edce2ff7701a20a6d6 Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.916708 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.918633 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.927027 4832 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 14:48:38 crc kubenswrapper[4832]: container 
&Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 12 14:48:38 crc kubenswrapper[4832]: if [[ -f "/env/_master" ]]; then Mar 12 14:48:38 crc kubenswrapper[4832]: set -o allexport Mar 12 14:48:38 crc kubenswrapper[4832]: source "/env/_master" Mar 12 14:48:38 crc kubenswrapper[4832]: set +o allexport Mar 12 14:48:38 crc kubenswrapper[4832]: fi Mar 12 14:48:38 crc kubenswrapper[4832]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 12 14:48:38 crc kubenswrapper[4832]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 12 14:48:38 crc kubenswrapper[4832]: ho_enable="--enable-hybrid-overlay" Mar 12 14:48:38 crc kubenswrapper[4832]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 12 14:48:38 crc kubenswrapper[4832]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 12 14:48:38 crc kubenswrapper[4832]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 12 14:48:38 crc kubenswrapper[4832]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 12 14:48:38 crc kubenswrapper[4832]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 12 14:48:38 crc kubenswrapper[4832]: --webhook-host=127.0.0.1 \ Mar 12 14:48:38 crc kubenswrapper[4832]: --webhook-port=9743 \ Mar 12 14:48:38 crc kubenswrapper[4832]: ${ho_enable} \ Mar 12 14:48:38 crc kubenswrapper[4832]: --enable-interconnect \ Mar 12 14:48:38 crc kubenswrapper[4832]: --disable-approver \ Mar 12 14:48:38 crc kubenswrapper[4832]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 12 14:48:38 crc kubenswrapper[4832]: --wait-for-kubernetes-api=200s \ Mar 12 14:48:38 crc kubenswrapper[4832]: 
--pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 12 14:48:38 crc kubenswrapper[4832]: --loglevel="${LOGLEVEL}" Mar 12 14:48:38 crc kubenswrapper[4832]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 14:48:38 crc kubenswrapper[4832]: > logger="UnhandledError" Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.929304 4832 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 14:48:38 crc kubenswrapper[4832]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 12 14:48:38 crc kubenswrapper[4832]: if [[ -f "/env/_master" ]]; then Mar 12 14:48:38 crc kubenswrapper[4832]: set -o allexport Mar 12 14:48:38 crc kubenswrapper[4832]: source "/env/_master" Mar 12 14:48:38 crc kubenswrapper[4832]: set +o allexport Mar 12 14:48:38 crc kubenswrapper[4832]: fi Mar 12 14:48:38 crc kubenswrapper[4832]: Mar 12 14:48:38 crc kubenswrapper[4832]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 12 14:48:38 crc kubenswrapper[4832]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 12 14:48:38 crc kubenswrapper[4832]: --disable-webhook \ Mar 12 14:48:38 crc kubenswrapper[4832]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 12 14:48:38 crc kubenswrapper[4832]: --loglevel="${LOGLEVEL}" Mar 12 14:48:38 crc kubenswrapper[4832]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 14:48:38 crc kubenswrapper[4832]: > logger="UnhandledError" Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.930471 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.947353 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b42b554d4e92e82dabec3f7452ea3c561ec925103b7c187274751001e7dff70c"} Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.948584 4832 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 14:48:38 crc kubenswrapper[4832]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 12 14:48:38 crc kubenswrapper[4832]: if [[ -f "/env/_master" ]]; then Mar 12 14:48:38 crc kubenswrapper[4832]: set -o allexport Mar 12 14:48:38 crc kubenswrapper[4832]: source "/env/_master" Mar 12 14:48:38 crc kubenswrapper[4832]: set +o allexport Mar 12 14:48:38 crc kubenswrapper[4832]: fi Mar 12 14:48:38 crc kubenswrapper[4832]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 12 14:48:38 crc kubenswrapper[4832]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 12 14:48:38 crc kubenswrapper[4832]: ho_enable="--enable-hybrid-overlay" Mar 12 14:48:38 crc kubenswrapper[4832]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 12 14:48:38 crc kubenswrapper[4832]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 12 14:48:38 crc kubenswrapper[4832]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 12 14:48:38 crc kubenswrapper[4832]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 12 14:48:38 crc kubenswrapper[4832]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 12 14:48:38 crc kubenswrapper[4832]: --webhook-host=127.0.0.1 \ Mar 12 14:48:38 crc kubenswrapper[4832]: --webhook-port=9743 \ Mar 12 14:48:38 crc kubenswrapper[4832]: ${ho_enable} \ Mar 12 14:48:38 crc kubenswrapper[4832]: --enable-interconnect \ Mar 12 14:48:38 crc kubenswrapper[4832]: --disable-approver \ Mar 12 14:48:38 crc kubenswrapper[4832]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 12 14:48:38 crc kubenswrapper[4832]: --wait-for-kubernetes-api=200s \ Mar 12 14:48:38 crc kubenswrapper[4832]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 12 14:48:38 crc kubenswrapper[4832]: --loglevel="${LOGLEVEL}" Mar 12 14:48:38 crc kubenswrapper[4832]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 14:48:38 crc kubenswrapper[4832]: > logger="UnhandledError" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.950075 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b0d4c3c707c02a77391c53ea5bec40356752774533d319edce2ff7701a20a6d6"} Mar 12 14:48:38 crc kubenswrapper[4832]: 
E0312 14:48:38.950819 4832 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 14:48:38 crc kubenswrapper[4832]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 12 14:48:38 crc kubenswrapper[4832]: if [[ -f "/env/_master" ]]; then Mar 12 14:48:38 crc kubenswrapper[4832]: set -o allexport Mar 12 14:48:38 crc kubenswrapper[4832]: source "/env/_master" Mar 12 14:48:38 crc kubenswrapper[4832]: set +o allexport Mar 12 14:48:38 crc kubenswrapper[4832]: fi Mar 12 14:48:38 crc kubenswrapper[4832]: Mar 12 14:48:38 crc kubenswrapper[4832]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 12 14:48:38 crc kubenswrapper[4832]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 12 14:48:38 crc kubenswrapper[4832]: --disable-webhook \ Mar 12 14:48:38 crc kubenswrapper[4832]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 12 14:48:38 crc kubenswrapper[4832]: --loglevel="${LOGLEVEL}" Mar 12 14:48:38 crc kubenswrapper[4832]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 14:48:38 crc kubenswrapper[4832]: > logger="UnhandledError" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.951412 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"37237332f3d0c1ff712d3910caa96be778ad76b5a98ccddc2a43e1a7737ed17d"} Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.951410 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.951919 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.952861 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.953293 4832 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 14:48:38 crc kubenswrapper[4832]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 12 14:48:38 crc kubenswrapper[4832]: set -o allexport Mar 12 14:48:38 crc kubenswrapper[4832]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 12 14:48:38 crc kubenswrapper[4832]: source /etc/kubernetes/apiserver-url.env Mar 12 14:48:38 crc kubenswrapper[4832]: else Mar 12 14:48:38 crc kubenswrapper[4832]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 12 14:48:38 crc kubenswrapper[4832]: exit 1 Mar 12 14:48:38 crc kubenswrapper[4832]: fi Mar 12 14:48:38 crc kubenswrapper[4832]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 12 14:48:38 crc 
kubenswrapper[4832]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Valu
e:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metad
ata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 14:48:38 crc kubenswrapper[4832]: > logger="UnhandledError" Mar 12 14:48:38 crc kubenswrapper[4832]: E0312 14:48:38.955327 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.961672 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.976488 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:38 crc kubenswrapper[4832]: I0312 14:48:38.990422 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:39 crc kubenswrapper[4832]: I0312 14:48:39.027265 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:39 crc kubenswrapper[4832]: I0312 14:48:39.044623 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:39 crc kubenswrapper[4832]: I0312 14:48:39.056218 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:39 crc kubenswrapper[4832]: I0312 14:48:39.065417 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:39 crc kubenswrapper[4832]: I0312 14:48:39.073316 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:39 crc kubenswrapper[4832]: I0312 14:48:39.082271 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:39 crc kubenswrapper[4832]: I0312 14:48:39.094476 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:39 crc kubenswrapper[4832]: I0312 14:48:39.102651 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:39 crc kubenswrapper[4832]: I0312 14:48:39.110674 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:39 crc kubenswrapper[4832]: I0312 14:48:39.194479 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:48:39 crc kubenswrapper[4832]: E0312 14:48:39.194664 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:48:40.194630884 +0000 UTC m=+78.838645150 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:48:39 crc kubenswrapper[4832]: I0312 14:48:39.295578 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:39 crc kubenswrapper[4832]: I0312 14:48:39.295898 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:48:39 crc kubenswrapper[4832]: I0312 14:48:39.295976 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:39 crc kubenswrapper[4832]: E0312 14:48:39.295783 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 14:48:39 crc kubenswrapper[4832]: I0312 14:48:39.296071 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:48:39 crc kubenswrapper[4832]: E0312 14:48:39.296201 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:40.296165042 +0000 UTC m=+78.940179308 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 14:48:39 crc kubenswrapper[4832]: E0312 14:48:39.296046 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:48:39 crc kubenswrapper[4832]: E0312 14:48:39.296262 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:48:39 crc kubenswrapper[4832]: E0312 14:48:39.296291 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:39 crc kubenswrapper[4832]: E0312 14:48:39.296069 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 14:48:39 crc kubenswrapper[4832]: E0312 14:48:39.296375 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:40.296348576 +0000 UTC m=+78.940362842 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:39 crc kubenswrapper[4832]: E0312 14:48:39.296422 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:40.296402378 +0000 UTC m=+78.940416704 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 14:48:39 crc kubenswrapper[4832]: E0312 14:48:39.296538 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:48:39 crc kubenswrapper[4832]: E0312 14:48:39.296618 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:48:39 crc kubenswrapper[4832]: E0312 14:48:39.296674 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:39 crc kubenswrapper[4832]: E0312 14:48:39.296769 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:40.296755427 +0000 UTC m=+78.940769653 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:39 crc kubenswrapper[4832]: I0312 14:48:39.618710 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:48:39 crc kubenswrapper[4832]: E0312 14:48:39.619161 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.206241 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:48:40 crc kubenswrapper[4832]: E0312 14:48:40.206573 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:48:42.206488211 +0000 UTC m=+80.850502477 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.307281 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.307349 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.307386 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.307712 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:48:40 crc kubenswrapper[4832]: E0312 14:48:40.307622 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 14:48:40 crc kubenswrapper[4832]: E0312 14:48:40.307847 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:42.307824604 +0000 UTC m=+80.951838840 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 14:48:40 crc kubenswrapper[4832]: E0312 14:48:40.307850 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:48:40 crc kubenswrapper[4832]: E0312 14:48:40.307886 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:48:40 crc kubenswrapper[4832]: E0312 14:48:40.307910 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:40 crc kubenswrapper[4832]: E0312 14:48:40.307624 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:48:40 crc kubenswrapper[4832]: E0312 14:48:40.307982 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:48:40 crc kubenswrapper[4832]: E0312 14:48:40.308011 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:40 crc kubenswrapper[4832]: E0312 14:48:40.307650 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 14:48:40 crc kubenswrapper[4832]: E0312 14:48:40.308102 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:42.30806782 +0000 UTC m=+80.952082076 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:40 crc kubenswrapper[4832]: E0312 14:48:40.308144 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:42.308125821 +0000 UTC m=+80.952140197 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 14:48:40 crc kubenswrapper[4832]: E0312 14:48:40.308172 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:42.308157602 +0000 UTC m=+80.952171978 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.618785 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:48:40 crc kubenswrapper[4832]: E0312 14:48:40.619322 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.618799 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:40 crc kubenswrapper[4832]: E0312 14:48:40.620489 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.625169 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.627052 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.630475 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.632446 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.634790 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.636110 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.637474 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.639629 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.641079 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.643146 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.644486 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.646959 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.648772 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.650217 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.652794 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.654442 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.656948 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.658085 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.659724 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.662019 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.663071 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.665179 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.666305 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.667765 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.668388 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.669195 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.670172 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.670796 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.671533 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.672115 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.672718 4832 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.672846 4832 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.674612 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.675237 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.675761 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.677172 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.678100 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.680878 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.682058 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.683651 4832 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.684377 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.685839 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.686766 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.688250 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.689027 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.690421 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.691219 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.692818 4832 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.693557 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.694912 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.695915 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.696718 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.698135 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 12 14:48:40 crc kubenswrapper[4832]: I0312 14:48:40.698888 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 12 14:48:41 crc kubenswrapper[4832]: I0312 14:48:41.618795 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:48:41 crc kubenswrapper[4832]: E0312 14:48:41.619217 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:48:42 crc kubenswrapper[4832]: I0312 14:48:42.225272 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:48:42 crc kubenswrapper[4832]: E0312 14:48:42.225427 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:48:46.225399572 +0000 UTC m=+84.869413838 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:48:42 crc kubenswrapper[4832]: I0312 14:48:42.326745 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:48:42 crc kubenswrapper[4832]: I0312 14:48:42.326851 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:42 crc kubenswrapper[4832]: I0312 14:48:42.326912 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:48:42 crc kubenswrapper[4832]: I0312 14:48:42.326961 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:42 crc kubenswrapper[4832]: E0312 14:48:42.326975 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:48:42 crc kubenswrapper[4832]: E0312 14:48:42.327016 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:48:42 crc kubenswrapper[4832]: E0312 14:48:42.327042 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:42 crc kubenswrapper[4832]: E0312 14:48:42.327072 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 14:48:42 crc kubenswrapper[4832]: E0312 14:48:42.327092 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 14:48:42 crc kubenswrapper[4832]: E0312 14:48:42.327146 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:46.327115574 +0000 UTC m=+84.971129840 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 14:48:42 crc kubenswrapper[4832]: E0312 14:48:42.327179 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:46.327163146 +0000 UTC m=+84.971177412 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:42 crc kubenswrapper[4832]: E0312 14:48:42.327206 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:46.327192556 +0000 UTC m=+84.971206822 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 14:48:42 crc kubenswrapper[4832]: E0312 14:48:42.327274 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:48:42 crc kubenswrapper[4832]: E0312 14:48:42.327301 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:48:42 crc kubenswrapper[4832]: E0312 14:48:42.327324 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:42 crc kubenswrapper[4832]: E0312 14:48:42.327387 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:46.327363581 +0000 UTC m=+84.971377837 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:42 crc kubenswrapper[4832]: I0312 14:48:42.619120 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:48:42 crc kubenswrapper[4832]: I0312 14:48:42.619208 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:42 crc kubenswrapper[4832]: E0312 14:48:42.619738 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:48:42 crc kubenswrapper[4832]: E0312 14:48:42.619838 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:48:42 crc kubenswrapper[4832]: I0312 14:48:42.633495 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:42 crc kubenswrapper[4832]: I0312 14:48:42.649698 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:42 crc kubenswrapper[4832]: I0312 14:48:42.661018 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:42 crc kubenswrapper[4832]: I0312 14:48:42.677373 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:42 crc kubenswrapper[4832]: I0312 14:48:42.689404 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:42 crc kubenswrapper[4832]: I0312 14:48:42.702250 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.025299 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.027179 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 
14:48:43.027331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.027538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.027789 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.038453 4832 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.038694 4832 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.039847 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.039891 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.039904 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.039922 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.039935 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:43Z","lastTransitionTime":"2026-03-12T14:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:43 crc kubenswrapper[4832]: E0312 14:48:43.054472 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.058778 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.059013 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.059143 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.059294 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.059415 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:43Z","lastTransitionTime":"2026-03-12T14:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:43 crc kubenswrapper[4832]: E0312 14:48:43.073450 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.078032 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.078069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.078078 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.078093 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.078102 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:43Z","lastTransitionTime":"2026-03-12T14:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:43 crc kubenswrapper[4832]: E0312 14:48:43.089262 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.092964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.093013 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.093030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.093047 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.093059 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:43Z","lastTransitionTime":"2026-03-12T14:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:43 crc kubenswrapper[4832]: E0312 14:48:43.108639 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.113304 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.113344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.113361 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.113387 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.113410 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:43Z","lastTransitionTime":"2026-03-12T14:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:43 crc kubenswrapper[4832]: E0312 14:48:43.124731 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:43 crc kubenswrapper[4832]: E0312 14:48:43.125174 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.126933 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.127054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.127130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.127226 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.127305 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:43Z","lastTransitionTime":"2026-03-12T14:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.232679 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.233205 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.233294 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.233381 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.233576 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:43Z","lastTransitionTime":"2026-03-12T14:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.336601 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.336667 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.336682 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.336701 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.336714 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:43Z","lastTransitionTime":"2026-03-12T14:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.438930 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.438981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.438991 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.439006 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.439016 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:43Z","lastTransitionTime":"2026-03-12T14:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.541801 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.541856 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.541872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.541895 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.541911 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:43Z","lastTransitionTime":"2026-03-12T14:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.618932 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:48:43 crc kubenswrapper[4832]: E0312 14:48:43.619338 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.644292 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.644367 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.644391 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.644416 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.644434 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:43Z","lastTransitionTime":"2026-03-12T14:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.747209 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.747278 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.747302 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.747333 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.747356 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:43Z","lastTransitionTime":"2026-03-12T14:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.849694 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.849756 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.849773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.849797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.849819 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:43Z","lastTransitionTime":"2026-03-12T14:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.952599 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.952641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.952651 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.952680 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:43 crc kubenswrapper[4832]: I0312 14:48:43.952689 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:43Z","lastTransitionTime":"2026-03-12T14:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.055480 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.055567 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.055583 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.055604 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.055618 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:44Z","lastTransitionTime":"2026-03-12T14:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.158971 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.159030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.159046 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.159068 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.159087 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:44Z","lastTransitionTime":"2026-03-12T14:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.262076 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.262128 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.262147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.262172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.262194 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:44Z","lastTransitionTime":"2026-03-12T14:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.365795 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.365874 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.365890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.365909 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.365919 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:44Z","lastTransitionTime":"2026-03-12T14:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.468126 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.468181 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.468195 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.468215 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.468229 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:44Z","lastTransitionTime":"2026-03-12T14:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.570611 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.570659 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.570670 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.570686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.570697 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:44Z","lastTransitionTime":"2026-03-12T14:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.619270 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.619270 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:44 crc kubenswrapper[4832]: E0312 14:48:44.619623 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:48:44 crc kubenswrapper[4832]: E0312 14:48:44.619839 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.676538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.676582 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.676608 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.676624 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.676633 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:44Z","lastTransitionTime":"2026-03-12T14:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.778826 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.778860 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.778871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.778885 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.778896 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:44Z","lastTransitionTime":"2026-03-12T14:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.880849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.880884 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.880893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.880911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.880920 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:44Z","lastTransitionTime":"2026-03-12T14:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.983164 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.983195 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.983209 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.983231 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:44 crc kubenswrapper[4832]: I0312 14:48:44.983242 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:44Z","lastTransitionTime":"2026-03-12T14:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.085724 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.085754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.085761 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.085776 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.085784 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:45Z","lastTransitionTime":"2026-03-12T14:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.190150 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.190257 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.190320 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.190350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.190407 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:45Z","lastTransitionTime":"2026-03-12T14:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.293556 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.293612 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.293631 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.293656 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.293712 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:45Z","lastTransitionTime":"2026-03-12T14:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.396344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.396410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.396430 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.396463 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.396486 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:45Z","lastTransitionTime":"2026-03-12T14:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.499168 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.499233 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.499255 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.499286 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.499310 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:45Z","lastTransitionTime":"2026-03-12T14:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.602271 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.602316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.602329 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.602346 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.602358 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:45Z","lastTransitionTime":"2026-03-12T14:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.618621 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:48:45 crc kubenswrapper[4832]: E0312 14:48:45.618737 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.705967 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.706094 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.706165 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.706243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.706273 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:45Z","lastTransitionTime":"2026-03-12T14:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.809434 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.809494 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.809539 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.809563 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.809580 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:45Z","lastTransitionTime":"2026-03-12T14:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.912073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.912115 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.912126 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.912140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:45 crc kubenswrapper[4832]: I0312 14:48:45.912149 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:45Z","lastTransitionTime":"2026-03-12T14:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.014543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.014617 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.014638 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.014667 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.014691 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:46Z","lastTransitionTime":"2026-03-12T14:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.117865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.117925 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.117944 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.117967 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.117984 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:46Z","lastTransitionTime":"2026-03-12T14:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.221112 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.221169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.221182 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.221202 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.221218 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:46Z","lastTransitionTime":"2026-03-12T14:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.263868 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:48:46 crc kubenswrapper[4832]: E0312 14:48:46.264153 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-12 14:48:54.264105426 +0000 UTC m=+92.908119682 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.324676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.324738 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.324749 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.324769 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.324786 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:46Z","lastTransitionTime":"2026-03-12T14:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.364805 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.364890 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.364928 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.364977 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:48:46 crc kubenswrapper[4832]: E0312 14:48:46.365059 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 14:48:46 crc kubenswrapper[4832]: E0312 14:48:46.365120 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 14:48:46 crc kubenswrapper[4832]: E0312 14:48:46.365140 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:48:46 crc kubenswrapper[4832]: E0312 14:48:46.365171 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:48:46 crc kubenswrapper[4832]: E0312 14:48:46.365191 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:46 crc kubenswrapper[4832]: E0312 14:48:46.365213 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:48:46 crc kubenswrapper[4832]: E0312 14:48:46.365260 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:48:46 crc kubenswrapper[4832]: E0312 14:48:46.365276 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:46 crc kubenswrapper[4832]: E0312 14:48:46.365827 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:54.365181632 +0000 UTC m=+93.009195898 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 14:48:46 crc kubenswrapper[4832]: E0312 14:48:46.365926 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:54.36590392 +0000 UTC m=+93.009918176 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 14:48:46 crc kubenswrapper[4832]: E0312 14:48:46.366004 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:54.365946341 +0000 UTC m=+93.009960607 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:46 crc kubenswrapper[4832]: E0312 14:48:46.366036 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 14:48:54.366022393 +0000 UTC m=+93.010036649 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.427822 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.427884 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.427893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.427913 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.427926 4832 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:46Z","lastTransitionTime":"2026-03-12T14:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.532321 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.532417 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.532442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.532471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.532490 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:46Z","lastTransitionTime":"2026-03-12T14:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.619319 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.619391 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:46 crc kubenswrapper[4832]: E0312 14:48:46.619648 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:48:46 crc kubenswrapper[4832]: E0312 14:48:46.619837 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.636797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.636863 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.636880 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.636904 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.636922 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:46Z","lastTransitionTime":"2026-03-12T14:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.639632 4832 scope.go:117] "RemoveContainer" containerID="628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11" Mar 12 14:48:46 crc kubenswrapper[4832]: E0312 14:48:46.639924 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.640317 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.740717 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.740796 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.740812 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.740839 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.740861 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:46Z","lastTransitionTime":"2026-03-12T14:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.844001 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.844084 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.844097 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.844117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.844128 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:46Z","lastTransitionTime":"2026-03-12T14:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.946679 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.947013 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.947164 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.947292 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.947416 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:46Z","lastTransitionTime":"2026-03-12T14:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:46 crc kubenswrapper[4832]: I0312 14:48:46.971472 4832 scope.go:117] "RemoveContainer" containerID="628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11" Mar 12 14:48:46 crc kubenswrapper[4832]: E0312 14:48:46.971699 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.050549 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.051111 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.051221 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.051317 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.051468 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:47Z","lastTransitionTime":"2026-03-12T14:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.154943 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.155078 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.155111 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.155141 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.155164 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:47Z","lastTransitionTime":"2026-03-12T14:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.258723 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.259001 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.259098 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.259192 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.259282 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:47Z","lastTransitionTime":"2026-03-12T14:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.261817 4832 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.363347 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.363430 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.363457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.363489 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.363553 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:47Z","lastTransitionTime":"2026-03-12T14:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.466377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.466437 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.466452 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.466476 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.466493 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:47Z","lastTransitionTime":"2026-03-12T14:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.569144 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.569187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.569198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.569214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.569227 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:47Z","lastTransitionTime":"2026-03-12T14:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.619589 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:48:47 crc kubenswrapper[4832]: E0312 14:48:47.619975 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.671550 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.671592 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.671603 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.671622 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.671633 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:47Z","lastTransitionTime":"2026-03-12T14:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.773914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.773951 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.773963 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.773980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.773990 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:47Z","lastTransitionTime":"2026-03-12T14:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.876851 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.876917 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.876933 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.876955 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.876977 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:47Z","lastTransitionTime":"2026-03-12T14:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.979807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.979858 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.979867 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.979879 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:47 crc kubenswrapper[4832]: I0312 14:48:47.979889 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:47Z","lastTransitionTime":"2026-03-12T14:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.082172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.082212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.082223 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.082235 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.082243 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:48Z","lastTransitionTime":"2026-03-12T14:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.186311 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.186374 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.186385 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.186399 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.186409 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:48Z","lastTransitionTime":"2026-03-12T14:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.289303 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.289376 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.289396 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.289419 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.289436 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:48Z","lastTransitionTime":"2026-03-12T14:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.392537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.392583 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.392606 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.392634 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.392651 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:48Z","lastTransitionTime":"2026-03-12T14:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.495500 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.495559 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.495574 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.495589 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.495597 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:48Z","lastTransitionTime":"2026-03-12T14:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.598122 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.598185 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.598202 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.598227 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.598244 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:48Z","lastTransitionTime":"2026-03-12T14:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.619739 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.619778 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:48 crc kubenswrapper[4832]: E0312 14:48:48.619980 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:48:48 crc kubenswrapper[4832]: E0312 14:48:48.620052 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.701456 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.701540 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.701560 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.701584 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.701601 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:48Z","lastTransitionTime":"2026-03-12T14:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.804167 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.804237 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.804256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.804283 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.804300 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:48Z","lastTransitionTime":"2026-03-12T14:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.907479 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.907602 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.907626 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.907656 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:48 crc kubenswrapper[4832]: I0312 14:48:48.907679 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:48Z","lastTransitionTime":"2026-03-12T14:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.011319 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.011379 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.011402 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.011430 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.011454 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:49Z","lastTransitionTime":"2026-03-12T14:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.114395 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.114460 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.114482 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.114536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.114555 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:49Z","lastTransitionTime":"2026-03-12T14:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.216952 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.217040 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.217066 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.217097 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.217122 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:49Z","lastTransitionTime":"2026-03-12T14:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.319277 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.319316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.319328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.319344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.319354 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:49Z","lastTransitionTime":"2026-03-12T14:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.421450 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.421494 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.421523 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.421542 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.421555 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:49Z","lastTransitionTime":"2026-03-12T14:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.524472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.524513 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.524524 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.524541 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.524552 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:49Z","lastTransitionTime":"2026-03-12T14:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.619492 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:48:49 crc kubenswrapper[4832]: E0312 14:48:49.619722 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.627331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.627394 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.627417 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.627446 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.627468 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:49Z","lastTransitionTime":"2026-03-12T14:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.730114 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.730168 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.730184 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.730206 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.730224 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:49Z","lastTransitionTime":"2026-03-12T14:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.833440 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.833486 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.833520 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.833539 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.833551 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:49Z","lastTransitionTime":"2026-03-12T14:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.935568 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.935602 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.935612 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.935628 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:49 crc kubenswrapper[4832]: I0312 14:48:49.935640 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:49Z","lastTransitionTime":"2026-03-12T14:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.038945 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.039000 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.039012 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.039030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.039043 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:50Z","lastTransitionTime":"2026-03-12T14:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.141222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.141286 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.141303 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.141328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.141351 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:50Z","lastTransitionTime":"2026-03-12T14:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.245598 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.245648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.245659 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.245684 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.245696 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:50Z","lastTransitionTime":"2026-03-12T14:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.348331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.348369 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.348382 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.348398 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.348410 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:50Z","lastTransitionTime":"2026-03-12T14:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.450624 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.450663 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.450676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.450690 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.450700 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:50Z","lastTransitionTime":"2026-03-12T14:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.553666 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.553715 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.553730 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.553754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.553771 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:50Z","lastTransitionTime":"2026-03-12T14:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.619387 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:48:50 crc kubenswrapper[4832]: E0312 14:48:50.619689 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.619786 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:50 crc kubenswrapper[4832]: E0312 14:48:50.620302 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.657262 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.657636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.657652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.657672 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.657686 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:50Z","lastTransitionTime":"2026-03-12T14:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.760472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.760536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.760549 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.760568 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.760581 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:50Z","lastTransitionTime":"2026-03-12T14:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.863838 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.863911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.863932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.863959 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.863980 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:50Z","lastTransitionTime":"2026-03-12T14:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.966818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.966885 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.966902 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.966928 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:50 crc kubenswrapper[4832]: I0312 14:48:50.966945 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:50Z","lastTransitionTime":"2026-03-12T14:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.069761 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.069800 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.069810 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.069825 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.069836 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:51Z","lastTransitionTime":"2026-03-12T14:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.172487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.172559 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.172571 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.172589 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.172601 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:51Z","lastTransitionTime":"2026-03-12T14:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.274956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.275005 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.275021 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.275048 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.275066 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:51Z","lastTransitionTime":"2026-03-12T14:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.377814 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.377861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.377877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.377936 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.377954 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:51Z","lastTransitionTime":"2026-03-12T14:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.480652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.481062 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.481456 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.481834 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.481888 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:51Z","lastTransitionTime":"2026-03-12T14:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.584607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.584653 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.584663 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.584680 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.584692 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:51Z","lastTransitionTime":"2026-03-12T14:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.619622 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:48:51 crc kubenswrapper[4832]: E0312 14:48:51.619937 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.687714 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.687753 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.687762 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.687775 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.687785 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:51Z","lastTransitionTime":"2026-03-12T14:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.789894 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.789979 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.789987 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.790002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.790012 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:51Z","lastTransitionTime":"2026-03-12T14:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.892932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.892986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.893001 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.893020 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.893035 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:51Z","lastTransitionTime":"2026-03-12T14:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.995194 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.995239 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.995247 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.995260 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:51 crc kubenswrapper[4832]: I0312 14:48:51.995268 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:51Z","lastTransitionTime":"2026-03-12T14:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.097971 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.098010 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.098020 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.098037 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.098048 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:52Z","lastTransitionTime":"2026-03-12T14:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.200547 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.200596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.200610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.200631 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.200646 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:52Z","lastTransitionTime":"2026-03-12T14:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.303648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.303691 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.303702 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.303720 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.303730 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:52Z","lastTransitionTime":"2026-03-12T14:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.405857 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.405911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.405928 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.405989 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.406012 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:52Z","lastTransitionTime":"2026-03-12T14:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.508436 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.508476 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.508486 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.508518 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.508536 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:52Z","lastTransitionTime":"2026-03-12T14:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.611869 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.611932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.611943 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.611960 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.611973 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:52Z","lastTransitionTime":"2026-03-12T14:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.619375 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.619596 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:52 crc kubenswrapper[4832]: E0312 14:48:52.619687 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:48:52 crc kubenswrapper[4832]: E0312 14:48:52.620532 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.631298 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.645016 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.655097 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.667433 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.677914 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.687231 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.697736 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.714964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.715013 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 
14:48:52.715025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.715041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.715052 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:52Z","lastTransitionTime":"2026-03-12T14:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.816437 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.816471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.816481 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.816496 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.816543 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:52Z","lastTransitionTime":"2026-03-12T14:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.921087 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.921170 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.921185 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.921210 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.921245 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:52Z","lastTransitionTime":"2026-03-12T14:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.987011 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3"} Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.987056 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592"} Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.989895 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88"} Mar 12 14:48:52 crc kubenswrapper[4832]: I0312 14:48:52.996819 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.004397 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.012339 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.020642 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.023318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.023353 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.023362 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.023377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.023387 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:53Z","lastTransitionTime":"2026-03-12T14:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.030660 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.038405 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.050591 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.059624 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.068605 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.080367 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.092017 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.109536 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.122888 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.127356 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.127399 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.127408 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.127423 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.127434 4832 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:53Z","lastTransitionTime":"2026-03-12T14:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.135615 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.230705 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.230752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.230765 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.230782 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.230794 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:53Z","lastTransitionTime":"2026-03-12T14:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.236842 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.236899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.236921 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.236949 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.236968 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:53Z","lastTransitionTime":"2026-03-12T14:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:53 crc kubenswrapper[4832]: E0312 14:48:53.252699 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.258768 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.259106 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.259189 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.259256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.259320 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:53Z","lastTransitionTime":"2026-03-12T14:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:53 crc kubenswrapper[4832]: E0312 14:48:53.280206 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:53Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.285802 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.285935 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.285954 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.285981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.285998 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:53Z","lastTransitionTime":"2026-03-12T14:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:53 crc kubenswrapper[4832]: E0312 14:48:53.306645 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:53Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.311731 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.311773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.311785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.311800 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.311810 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:53Z","lastTransitionTime":"2026-03-12T14:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:53 crc kubenswrapper[4832]: E0312 14:48:53.326257 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:53Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.331158 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.331201 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.331216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.331235 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.331249 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:53Z","lastTransitionTime":"2026-03-12T14:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:53 crc kubenswrapper[4832]: E0312 14:48:53.349754 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:53Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:53 crc kubenswrapper[4832]: E0312 14:48:53.349982 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.353613 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.353688 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.353714 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.353744 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.353767 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:53Z","lastTransitionTime":"2026-03-12T14:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.456396 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.456463 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.456684 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.456703 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.456727 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:53Z","lastTransitionTime":"2026-03-12T14:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.559792 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.559840 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.559855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.559873 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.559884 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:53Z","lastTransitionTime":"2026-03-12T14:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.619147 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:48:53 crc kubenswrapper[4832]: E0312 14:48:53.619381 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.662350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.662408 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.662429 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.662455 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.662473 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:53Z","lastTransitionTime":"2026-03-12T14:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.764904 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.764962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.764980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.765010 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.765040 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:53Z","lastTransitionTime":"2026-03-12T14:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.867796 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.867848 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.867861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.867879 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.867896 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:53Z","lastTransitionTime":"2026-03-12T14:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.971130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.971175 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.971191 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.971217 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.971234 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:53Z","lastTransitionTime":"2026-03-12T14:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:53 crc kubenswrapper[4832]: I0312 14:48:53.993700 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2"} Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.011932 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"n
ame\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:54Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.028427 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:54Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.049194 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:54Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.073706 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:54Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.074589 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.074647 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.074665 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.074689 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.074706 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:54Z","lastTransitionTime":"2026-03-12T14:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.093617 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:54Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.107319 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:48:54Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.126537 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:54Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.178183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.178237 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.178255 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.178285 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.178303 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:54Z","lastTransitionTime":"2026-03-12T14:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.280962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.281002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.281012 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.281025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.281033 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:54Z","lastTransitionTime":"2026-03-12T14:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.350492 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:48:54 crc kubenswrapper[4832]: E0312 14:48:54.350751 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-12 14:49:10.350732387 +0000 UTC m=+108.994746613 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.383492 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.383553 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.383567 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.383586 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.383596 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:54Z","lastTransitionTime":"2026-03-12T14:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.451732 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.451793 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.451837 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.451871 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:54 crc kubenswrapper[4832]: E0312 14:48:54.451945 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Mar 12 14:48:54 crc kubenswrapper[4832]: E0312 14:48:54.451984 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:48:54 crc kubenswrapper[4832]: E0312 14:48:54.452000 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:54 crc kubenswrapper[4832]: E0312 14:48:54.452013 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 14:48:54 crc kubenswrapper[4832]: E0312 14:48:54.452050 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:48:54 crc kubenswrapper[4832]: E0312 14:48:54.452082 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:48:54 crc kubenswrapper[4832]: E0312 14:48:54.452098 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:54 crc kubenswrapper[4832]: E0312 14:48:54.451946 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 14:48:54 crc 
kubenswrapper[4832]: E0312 14:48:54.452059 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 14:49:10.452037389 +0000 UTC m=+109.096051685 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:54 crc kubenswrapper[4832]: E0312 14:48:54.452289 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 14:49:10.452254734 +0000 UTC m=+109.096268970 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 14:48:54 crc kubenswrapper[4832]: E0312 14:48:54.452316 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 14:49:10.452303215 +0000 UTC m=+109.096317571 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:48:54 crc kubenswrapper[4832]: E0312 14:48:54.452348 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 14:49:10.452338606 +0000 UTC m=+109.096352852 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.489445 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.489533 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.489548 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.489566 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.489579 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:54Z","lastTransitionTime":"2026-03-12T14:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.591658 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.591698 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.591706 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.591722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.591733 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:54Z","lastTransitionTime":"2026-03-12T14:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.619474 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.619494 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:54 crc kubenswrapper[4832]: E0312 14:48:54.620179 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:48:54 crc kubenswrapper[4832]: E0312 14:48:54.619964 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.628647 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.693962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.694006 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.694016 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.694034 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.694044 4832 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:54Z","lastTransitionTime":"2026-03-12T14:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.796547 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.796602 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.796613 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.796648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.796660 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:54Z","lastTransitionTime":"2026-03-12T14:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.898932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.899024 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.899055 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.899086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:54 crc kubenswrapper[4832]: I0312 14:48:54.899110 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:54Z","lastTransitionTime":"2026-03-12T14:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.001001 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.001062 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.001085 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.001107 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.001124 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:55Z","lastTransitionTime":"2026-03-12T14:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.103409 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.103462 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.103473 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.103487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.103496 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:55Z","lastTransitionTime":"2026-03-12T14:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.206860 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.206913 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.206926 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.206942 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.206958 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:55Z","lastTransitionTime":"2026-03-12T14:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.309740 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.309809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.309832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.309859 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.309876 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:55Z","lastTransitionTime":"2026-03-12T14:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.413295 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.413361 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.413383 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.413414 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.413434 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:55Z","lastTransitionTime":"2026-03-12T14:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.516664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.516717 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.516735 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.516759 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.516776 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:55Z","lastTransitionTime":"2026-03-12T14:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.576649 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-c2phv"] Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.577125 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-p4tdb"] Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.577438 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-p4tdb" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.578066 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.580207 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.580382 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.580379 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.580385 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.580208 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.581046 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.581293 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.585713 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.599949 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.613433 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.618616 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:48:55 crc kubenswrapper[4832]: E0312 14:48:55.618702 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.619058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.619114 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.619129 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.619148 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.619161 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:55Z","lastTransitionTime":"2026-03-12T14:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.628704 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.637700 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.647619 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.658412 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.660176 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-host-run-netns\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.660212 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1-hosts-file\") pod \"node-resolver-p4tdb\" (UID: \"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\") " pod="openshift-dns/node-resolver-p4tdb" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.660247 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-host-run-multus-certs\") 
pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.660276 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-os-release\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.660296 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-host-run-k8s-cni-cncf-io\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.660313 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-multus-conf-dir\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.660332 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-multus-daemon-config\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.660351 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhp86\" (UniqueName: \"kubernetes.io/projected/ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1-kube-api-access-qhp86\") pod \"node-resolver-p4tdb\" (UID: 
\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\") " pod="openshift-dns/node-resolver-p4tdb" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.660371 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-cnibin\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.660389 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-multus-socket-dir-parent\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.660407 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-host-var-lib-kubelet\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.660428 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-hostroot\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.660450 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-system-cni-dir\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " 
pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.660469 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-host-var-lib-cni-bin\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.660489 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-host-var-lib-cni-multus\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.660528 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-etc-kubernetes\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.660560 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-multus-cni-dir\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.660580 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmzrf\" (UniqueName: \"kubernetes.io/projected/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-kube-api-access-mmzrf\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 
12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.660610 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-cni-binary-copy\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.669349 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/ho
st\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.677359 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a4
2ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.688156 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.698342 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.715288 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d6
08d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.722387 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.722452 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.722470 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.722535 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.722554 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:55Z","lastTransitionTime":"2026-03-12T14:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.732371 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.744622 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.757832 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761231 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-cni-binary-copy\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 
12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761274 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-host-run-netns\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761290 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1-hosts-file\") pod \"node-resolver-p4tdb\" (UID: \"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\") " pod="openshift-dns/node-resolver-p4tdb" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761314 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-host-run-multus-certs\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761334 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-os-release\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761348 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-host-run-k8s-cni-cncf-io\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761361 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-multus-conf-dir\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761375 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-cnibin\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761389 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-multus-socket-dir-parent\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761406 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-host-var-lib-kubelet\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761423 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-hostroot\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761440 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-multus-daemon-config\") pod \"multus-c2phv\" (UID: 
\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761456 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhp86\" (UniqueName: \"kubernetes.io/projected/ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1-kube-api-access-qhp86\") pod \"node-resolver-p4tdb\" (UID: \"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\") " pod="openshift-dns/node-resolver-p4tdb" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761473 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-system-cni-dir\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761485 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-host-var-lib-cni-bin\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761520 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-multus-cni-dir\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761534 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-host-var-lib-cni-multus\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: 
I0312 14:48:55.761548 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-etc-kubernetes\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761563 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmzrf\" (UniqueName: \"kubernetes.io/projected/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-kube-api-access-mmzrf\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761818 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-host-run-netns\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761861 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1-hosts-file\") pod \"node-resolver-p4tdb\" (UID: \"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\") " pod="openshift-dns/node-resolver-p4tdb" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761881 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-host-run-multus-certs\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761918 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-os-release\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761940 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-host-run-k8s-cni-cncf-io\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761959 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-multus-conf-dir\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.761985 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-cnibin\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.762019 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-multus-socket-dir-parent\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.762040 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-host-var-lib-kubelet\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " 
pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.762060 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-hostroot\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.762372 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-cni-binary-copy\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.762473 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-host-var-lib-cni-bin\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.762702 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-multus-daemon-config\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.762780 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-multus-cni-dir\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.762822 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" 
(UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-host-var-lib-cni-multus\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.762857 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-etc-kubernetes\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.762921 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-system-cni-dir\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.767480 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.776295 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.778822 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhp86\" (UniqueName: \"kubernetes.io/projected/ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1-kube-api-access-qhp86\") pod \"node-resolver-p4tdb\" (UID: \"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\") " pod="openshift-dns/node-resolver-p4tdb" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.779330 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmzrf\" (UniqueName: \"kubernetes.io/projected/7c82e050-0168-4210-bb2d-7d8bbbc5e74e-kube-api-access-mmzrf\") pod \"multus-c2phv\" (UID: \"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\") " pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc 
kubenswrapper[4832]: I0312 14:48:55.789488 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.799523 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.809337 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.824901 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.824936 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.824948 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.824964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.824977 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:55Z","lastTransitionTime":"2026-03-12T14:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.890935 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-p4tdb" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.896969 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-c2phv" Mar 12 14:48:55 crc kubenswrapper[4832]: W0312 14:48:55.911152 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c82e050_0168_4210_bb2d_7d8bbbc5e74e.slice/crio-058d1a64344fa202fa020c7363afefd957c6396c8f8392334e73a719e3d60c4b WatchSource:0}: Error finding container 058d1a64344fa202fa020c7363afefd957c6396c8f8392334e73a719e3d60c4b: Status 404 returned error can't find the container with id 058d1a64344fa202fa020c7363afefd957c6396c8f8392334e73a719e3d60c4b Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.929699 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.929973 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.929983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.929995 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.930004 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:55Z","lastTransitionTime":"2026-03-12T14:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.948528 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-72zcf"] Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.954579 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-kdl9v"] Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.954974 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.957206 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.956740 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5zjpx"] Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.960782 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.960833 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.960898 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.961239 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.962035 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.962290 4832 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.962753 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.965206 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8c62aa7e-9fce-4677-b6bc-beb87644af0a-rootfs\") pod \"machine-config-daemon-kdl9v\" (UID: \"8c62aa7e-9fce-4677-b6bc-beb87644af0a\") " pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.965247 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9f5p\" (UniqueName: \"kubernetes.io/projected/8c62aa7e-9fce-4677-b6bc-beb87644af0a-kube-api-access-z9f5p\") pod \"machine-config-daemon-kdl9v\" (UID: \"8c62aa7e-9fce-4677-b6bc-beb87644af0a\") " pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.965272 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7d5f70b0-6d75-4511-8423-e826258274d1-system-cni-dir\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.965313 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7d5f70b0-6d75-4511-8423-e826258274d1-cnibin\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " 
pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.965340 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7d5f70b0-6d75-4511-8423-e826258274d1-os-release\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.965360 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nfqm\" (UniqueName: \"kubernetes.io/projected/7d5f70b0-6d75-4511-8423-e826258274d1-kube-api-access-4nfqm\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.965390 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c62aa7e-9fce-4677-b6bc-beb87644af0a-proxy-tls\") pod \"machine-config-daemon-kdl9v\" (UID: \"8c62aa7e-9fce-4677-b6bc-beb87644af0a\") " pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.965412 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c62aa7e-9fce-4677-b6bc-beb87644af0a-mcd-auth-proxy-config\") pod \"machine-config-daemon-kdl9v\" (UID: \"8c62aa7e-9fce-4677-b6bc-beb87644af0a\") " pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.965434 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7d5f70b0-6d75-4511-8423-e826258274d1-cni-binary-copy\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.965454 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7d5f70b0-6d75-4511-8423-e826258274d1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.965475 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7d5f70b0-6d75-4511-8423-e826258274d1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.965517 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.965721 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.965876 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.965959 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.967097 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"env-overrides" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.967457 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.967575 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.967620 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.977427 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:55 crc kubenswrapper[4832]: I0312 14:48:55.991591 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:55Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.001703 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p4tdb" event={"ID":"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1","Type":"ContainerStarted","Data":"0c8776b86b8e67b2017d0aa2da4af8d8d7f74bf983d42e27264e9201c0a80f9e"} Mar 12 14:48:56 crc 
kubenswrapper[4832]: I0312 14:48:56.005192 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c2phv" event={"ID":"7c82e050-0168-4210-bb2d-7d8bbbc5e74e","Type":"ContainerStarted","Data":"058d1a64344fa202fa020c7363afefd957c6396c8f8392334e73a719e3d60c4b"} Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.005682 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.020473 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.034005 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.034035 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.034044 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.034059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.034068 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:56Z","lastTransitionTime":"2026-03-12T14:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.034615 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.049692 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.061470 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.068773 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c62aa7e-9fce-4677-b6bc-beb87644af0a-proxy-tls\") pod \"machine-config-daemon-kdl9v\" (UID: \"8c62aa7e-9fce-4677-b6bc-beb87644af0a\") " pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.068813 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/8c62aa7e-9fce-4677-b6bc-beb87644af0a-mcd-auth-proxy-config\") pod \"machine-config-daemon-kdl9v\" (UID: \"8c62aa7e-9fce-4677-b6bc-beb87644af0a\") " pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.068840 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.068862 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-cni-netd\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.068881 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7d5f70b0-6d75-4511-8423-e826258274d1-cni-binary-copy\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.068901 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-run-ovn-kubernetes\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.068946 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-var-lib-openvswitch\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.068966 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7d5f70b0-6d75-4511-8423-e826258274d1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.068980 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-slash\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.068996 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-kubelet\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069008 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-run-openvswitch\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc 
kubenswrapper[4832]: I0312 14:48:56.069022 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/18cc235e-1890-485d-8ca2-bf03b2006ab9-ovnkube-config\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069039 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7d5f70b0-6d75-4511-8423-e826258274d1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069054 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-cni-bin\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069076 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/18cc235e-1890-485d-8ca2-bf03b2006ab9-ovnkube-script-lib\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069097 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18cc235e-1890-485d-8ca2-bf03b2006ab9-env-overrides\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069110 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knqcx\" (UniqueName: \"kubernetes.io/projected/18cc235e-1890-485d-8ca2-bf03b2006ab9-kube-api-access-knqcx\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069126 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8c62aa7e-9fce-4677-b6bc-beb87644af0a-rootfs\") pod \"machine-config-daemon-kdl9v\" (UID: \"8c62aa7e-9fce-4677-b6bc-beb87644af0a\") " pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069142 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9f5p\" (UniqueName: \"kubernetes.io/projected/8c62aa7e-9fce-4677-b6bc-beb87644af0a-kube-api-access-z9f5p\") pod \"machine-config-daemon-kdl9v\" (UID: \"8c62aa7e-9fce-4677-b6bc-beb87644af0a\") " pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069155 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7d5f70b0-6d75-4511-8423-e826258274d1-system-cni-dir\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069172 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-run-netns\") pod \"ovnkube-node-5zjpx\" 
(UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069186 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-run-ovn\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069208 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/18cc235e-1890-485d-8ca2-bf03b2006ab9-ovn-node-metrics-cert\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069228 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7d5f70b0-6d75-4511-8423-e826258274d1-cnibin\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069244 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-etc-openvswitch\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069260 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-run-systemd\") pod \"ovnkube-node-5zjpx\" 
(UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069272 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-node-log\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069286 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-log-socket\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069302 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7d5f70b0-6d75-4511-8423-e826258274d1-os-release\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069317 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nfqm\" (UniqueName: \"kubernetes.io/projected/7d5f70b0-6d75-4511-8423-e826258274d1-kube-api-access-4nfqm\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069333 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-systemd-units\") pod 
\"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069429 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7d5f70b0-6d75-4511-8423-e826258274d1-cnibin\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069511 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c62aa7e-9fce-4677-b6bc-beb87644af0a-mcd-auth-proxy-config\") pod \"machine-config-daemon-kdl9v\" (UID: \"8c62aa7e-9fce-4677-b6bc-beb87644af0a\") " pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069521 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7d5f70b0-6d75-4511-8423-e826258274d1-os-release\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069593 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7d5f70b0-6d75-4511-8423-e826258274d1-system-cni-dir\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069681 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7d5f70b0-6d75-4511-8423-e826258274d1-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069717 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8c62aa7e-9fce-4677-b6bc-beb87644af0a-rootfs\") pod \"machine-config-daemon-kdl9v\" (UID: \"8c62aa7e-9fce-4677-b6bc-beb87644af0a\") " pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.069963 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7d5f70b0-6d75-4511-8423-e826258274d1-cni-binary-copy\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.070082 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7d5f70b0-6d75-4511-8423-e826258274d1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.071717 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c62aa7e-9fce-4677-b6bc-beb87644af0a-proxy-tls\") pod \"machine-config-daemon-kdl9v\" (UID: \"8c62aa7e-9fce-4677-b6bc-beb87644af0a\") " pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.072583 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.080885 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.083952 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9f5p\" (UniqueName: \"kubernetes.io/projected/8c62aa7e-9fce-4677-b6bc-beb87644af0a-kube-api-access-z9f5p\") pod \"machine-config-daemon-kdl9v\" (UID: \"8c62aa7e-9fce-4677-b6bc-beb87644af0a\") " pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.084482 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nfqm\" (UniqueName: \"kubernetes.io/projected/7d5f70b0-6d75-4511-8423-e826258274d1-kube-api-access-4nfqm\") pod \"multus-additional-cni-plugins-72zcf\" (UID: \"7d5f70b0-6d75-4511-8423-e826258274d1\") " 
pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.092073 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.103004 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.113653 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.129448 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.136064 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.136093 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.136102 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.136117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.136128 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:56Z","lastTransitionTime":"2026-03-12T14:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.139218 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.151845 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.160422 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.169734 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18cc235e-1890-485d-8ca2-bf03b2006ab9-env-overrides\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.169760 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knqcx\" (UniqueName: \"kubernetes.io/projected/18cc235e-1890-485d-8ca2-bf03b2006ab9-kube-api-access-knqcx\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.169777 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-run-netns\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.169793 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-run-ovn\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.169815 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-etc-openvswitch\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.169832 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/18cc235e-1890-485d-8ca2-bf03b2006ab9-ovn-node-metrics-cert\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 
crc kubenswrapper[4832]: I0312 14:48:56.169856 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-run-systemd\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.169874 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-node-log\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.169889 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-log-socket\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.169906 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-systemd-units\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.169929 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.169944 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-cni-netd\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.169960 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-var-lib-openvswitch\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.169975 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-run-ovn-kubernetes\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.169990 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-slash\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.170007 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-kubelet\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.170034 4832 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-run-openvswitch\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.170049 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/18cc235e-1890-485d-8ca2-bf03b2006ab9-ovnkube-config\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.170063 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-cni-bin\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.170082 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/18cc235e-1890-485d-8ca2-bf03b2006ab9-ovnkube-script-lib\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.170783 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/18cc235e-1890-485d-8ca2-bf03b2006ab9-ovnkube-script-lib\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.171196 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/18cc235e-1890-485d-8ca2-bf03b2006ab9-env-overrides\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.171405 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-run-netns\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.171433 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-run-ovn\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.171457 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-etc-openvswitch\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.171873 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-var-lib-openvswitch\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.171928 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-run-systemd\") pod \"ovnkube-node-5zjpx\" (UID: 
\"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.171911 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.171961 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-node-log\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.171953 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-kubelet\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.171989 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-log-socket\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.172026 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-run-openvswitch\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.172037 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-cni-netd\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.172033 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-cni-bin\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.172050 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.172046 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-run-ovn-kubernetes\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.172077 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-slash\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.172100 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-systemd-units\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.172460 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/18cc235e-1890-485d-8ca2-bf03b2006ab9-ovnkube-config\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.174672 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/18cc235e-1890-485d-8ca2-bf03b2006ab9-ovn-node-metrics-cert\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.181887 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.186978 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knqcx\" (UniqueName: \"kubernetes.io/projected/18cc235e-1890-485d-8ca2-bf03b2006ab9-kube-api-access-knqcx\") pod \"ovnkube-node-5zjpx\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.198389 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.210869 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.221669 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.234590 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.238089 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.238119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.238130 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.238161 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.238186 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:56Z","lastTransitionTime":"2026-03-12T14:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.247338 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.258983 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:56Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.305869 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-72zcf" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.313050 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.318438 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:48:56 crc kubenswrapper[4832]: W0312 14:48:56.329975 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c62aa7e_9fce_4677_b6bc_beb87644af0a.slice/crio-0b5d4007815acb9629efccb4ce6f327ef9d13410d8b7a95e423f4faf624b5b18 WatchSource:0}: Error finding container 0b5d4007815acb9629efccb4ce6f327ef9d13410d8b7a95e423f4faf624b5b18: Status 404 returned error can't find the container with id 0b5d4007815acb9629efccb4ce6f327ef9d13410d8b7a95e423f4faf624b5b18 Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.340142 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.340178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.340190 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.340207 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.340218 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:56Z","lastTransitionTime":"2026-03-12T14:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.441365 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.441390 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.441399 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.441411 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.441420 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:56Z","lastTransitionTime":"2026-03-12T14:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.544515 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.544545 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.544555 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.544570 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.544580 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:56Z","lastTransitionTime":"2026-03-12T14:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.619448 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:48:56 crc kubenswrapper[4832]: E0312 14:48:56.619588 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.619798 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:56 crc kubenswrapper[4832]: E0312 14:48:56.619954 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.646495 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.646575 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.646590 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.646611 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.646626 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:56Z","lastTransitionTime":"2026-03-12T14:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.748833 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.748868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.748876 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.748888 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.748897 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:56Z","lastTransitionTime":"2026-03-12T14:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.851692 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.851746 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.851769 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.851796 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.851817 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:56Z","lastTransitionTime":"2026-03-12T14:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.954751 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.954786 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.954795 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.954809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:56 crc kubenswrapper[4832]: I0312 14:48:56.954819 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:56Z","lastTransitionTime":"2026-03-12T14:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.008635 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerStarted","Data":"f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254"} Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.008679 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerStarted","Data":"f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70"} Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.008693 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerStarted","Data":"0b5d4007815acb9629efccb4ce6f327ef9d13410d8b7a95e423f4faf624b5b18"} Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.010103 4832 generic.go:334] "Generic (PLEG): container finished" podID="7d5f70b0-6d75-4511-8423-e826258274d1" containerID="39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94" exitCode=0 Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.010152 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" event={"ID":"7d5f70b0-6d75-4511-8423-e826258274d1","Type":"ContainerDied","Data":"39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94"} Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.010168 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" event={"ID":"7d5f70b0-6d75-4511-8423-e826258274d1","Type":"ContainerStarted","Data":"5acd82c45473390ff436787a7cffb65b722ea104e6a2ee1c8aea4a20ebf90102"} Mar 12 14:48:57 crc 
kubenswrapper[4832]: I0312 14:48:57.011788 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p4tdb" event={"ID":"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1","Type":"ContainerStarted","Data":"7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288"} Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.012938 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c2phv" event={"ID":"7c82e050-0168-4210-bb2d-7d8bbbc5e74e","Type":"ContainerStarted","Data":"b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540"} Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.015251 4832 generic.go:334] "Generic (PLEG): container finished" podID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerID="873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f" exitCode=0 Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.015280 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerDied","Data":"873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f"} Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.015295 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerStarted","Data":"b621bec7a83b49b63024462040fc9075077debcc3ba22ca56dddbb2ff9e63e94"} Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.024881 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.049007 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.057249 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.057281 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.057291 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.057306 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.057316 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:57Z","lastTransitionTime":"2026-03-12T14:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.063216 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.077271 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.091338 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.107218 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.125237 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.137321 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.152625 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.160830 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.160858 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.160865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.160878 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.160886 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:57Z","lastTransitionTime":"2026-03-12T14:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.162705 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.176706 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.189605 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c973617444
7869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T1
4:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.211585 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.226640 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.237859 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.252072 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.264046 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.265883 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.265930 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.265950 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.265968 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.265985 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:57Z","lastTransitionTime":"2026-03-12T14:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.284931 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.298638 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.315030 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.329223 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.341146 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.353049 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.363479 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.368599 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.368625 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.368633 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.368645 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.368654 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:57Z","lastTransitionTime":"2026-03-12T14:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.377454 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.387783 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.391256 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-ssdc8"] Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.391615 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-ssdc8" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.393626 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.393788 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.393876 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.393905 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.404361 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.423539 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.437650 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.449325 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a
0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.465807 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.470529 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.470567 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.470575 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.470592 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.470600 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:57Z","lastTransitionTime":"2026-03-12T14:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.475626 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.482569 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d32edbc-db72-4220-bc60-8675d56c803c-host\") pod \"node-ca-ssdc8\" (UID: \"7d32edbc-db72-4220-bc60-8675d56c803c\") " pod="openshift-image-registry/node-ca-ssdc8" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.482781 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-2hdcm\" (UniqueName: \"kubernetes.io/projected/7d32edbc-db72-4220-bc60-8675d56c803c-kube-api-access-2hdcm\") pod \"node-ca-ssdc8\" (UID: \"7d32edbc-db72-4220-bc60-8675d56c803c\") " pod="openshift-image-registry/node-ca-ssdc8" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.482899 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7d32edbc-db72-4220-bc60-8675d56c803c-serviceca\") pod \"node-ca-ssdc8\" (UID: \"7d32edbc-db72-4220-bc60-8675d56c803c\") " pod="openshift-image-registry/node-ca-ssdc8" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.488232 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.498315 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.511422 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.526583 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.539376 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.553274 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.565391 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.572834 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.572878 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.572890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.572905 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.572917 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:57Z","lastTransitionTime":"2026-03-12T14:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.576282 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:57Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.583691 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d32edbc-db72-4220-bc60-8675d56c803c-host\") pod \"node-ca-ssdc8\" (UID: \"7d32edbc-db72-4220-bc60-8675d56c803c\") " pod="openshift-image-registry/node-ca-ssdc8" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.583724 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hdcm\" (UniqueName: \"kubernetes.io/projected/7d32edbc-db72-4220-bc60-8675d56c803c-kube-api-access-2hdcm\") pod \"node-ca-ssdc8\" (UID: \"7d32edbc-db72-4220-bc60-8675d56c803c\") " pod="openshift-image-registry/node-ca-ssdc8" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.583748 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7d32edbc-db72-4220-bc60-8675d56c803c-serviceca\") pod \"node-ca-ssdc8\" (UID: \"7d32edbc-db72-4220-bc60-8675d56c803c\") " pod="openshift-image-registry/node-ca-ssdc8" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.583803 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d32edbc-db72-4220-bc60-8675d56c803c-host\") pod \"node-ca-ssdc8\" (UID: \"7d32edbc-db72-4220-bc60-8675d56c803c\") " pod="openshift-image-registry/node-ca-ssdc8" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.584687 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7d32edbc-db72-4220-bc60-8675d56c803c-serviceca\") pod \"node-ca-ssdc8\" (UID: \"7d32edbc-db72-4220-bc60-8675d56c803c\") " pod="openshift-image-registry/node-ca-ssdc8" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.600225 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hdcm\" (UniqueName: \"kubernetes.io/projected/7d32edbc-db72-4220-bc60-8675d56c803c-kube-api-access-2hdcm\") pod \"node-ca-ssdc8\" (UID: \"7d32edbc-db72-4220-bc60-8675d56c803c\") " pod="openshift-image-registry/node-ca-ssdc8" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.619385 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:48:57 crc kubenswrapper[4832]: E0312 14:48:57.619474 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.619583 4832 scope.go:117] "RemoveContainer" containerID="628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11" Mar 12 14:48:57 crc kubenswrapper[4832]: E0312 14:48:57.619755 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.675752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.676139 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.676156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.676176 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.676190 4832 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:57Z","lastTransitionTime":"2026-03-12T14:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.777996 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.778134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.778196 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.778265 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.778321 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:57Z","lastTransitionTime":"2026-03-12T14:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.877085 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-ssdc8" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.879856 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.879888 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.879897 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.879912 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.879922 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:57Z","lastTransitionTime":"2026-03-12T14:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:57 crc kubenswrapper[4832]: W0312 14:48:57.888427 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d32edbc_db72_4220_bc60_8675d56c803c.slice/crio-b2f38ff650f55ee3f34f5869668ea4d00ac3d49b3ad92365bd2ad32b4c4d2cca WatchSource:0}: Error finding container b2f38ff650f55ee3f34f5869668ea4d00ac3d49b3ad92365bd2ad32b4c4d2cca: Status 404 returned error can't find the container with id b2f38ff650f55ee3f34f5869668ea4d00ac3d49b3ad92365bd2ad32b4c4d2cca Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.992750 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.992969 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.992979 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.992995 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:57 crc kubenswrapper[4832]: I0312 14:48:57.993006 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:57Z","lastTransitionTime":"2026-03-12T14:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.028557 4832 generic.go:334] "Generic (PLEG): container finished" podID="7d5f70b0-6d75-4511-8423-e826258274d1" containerID="dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df" exitCode=0 Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.028637 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" event={"ID":"7d5f70b0-6d75-4511-8423-e826258274d1","Type":"ContainerDied","Data":"dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df"} Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.029835 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ssdc8" event={"ID":"7d32edbc-db72-4220-bc60-8675d56c803c","Type":"ContainerStarted","Data":"b2f38ff650f55ee3f34f5869668ea4d00ac3d49b3ad92365bd2ad32b4c4d2cca"} Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.040265 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerStarted","Data":"7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224"} Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.040311 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerStarted","Data":"be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85"} Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.040326 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerStarted","Data":"a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2"} Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.040339 4832 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerStarted","Data":"4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be"} Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.040349 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerStarted","Data":"0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b"} Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.041855 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:58Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.054793 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:58Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.071582 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:58Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.088889 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:58Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.101355 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.101390 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.101401 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.101417 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.101428 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:58Z","lastTransitionTime":"2026-03-12T14:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.102331 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:58Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.116928 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:48:58Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.137543 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:58Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.153339 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:58Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.169421 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:58Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.178848 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:58Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.198459 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:58Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.203003 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.203041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.203052 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.203068 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.203080 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:58Z","lastTransitionTime":"2026-03-12T14:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.213348 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:58Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.229313 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:58Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.239345 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:58Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.306014 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.306054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.306066 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 
14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.306083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.306095 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:58Z","lastTransitionTime":"2026-03-12T14:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.408298 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.408340 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.408352 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.408369 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.408381 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:58Z","lastTransitionTime":"2026-03-12T14:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.510302 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.510350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.510366 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.510393 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.510410 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:58Z","lastTransitionTime":"2026-03-12T14:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.612368 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.612420 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.612434 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.612453 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.612467 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:58Z","lastTransitionTime":"2026-03-12T14:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.618982 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.619019 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:48:58 crc kubenswrapper[4832]: E0312 14:48:58.619089 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:48:58 crc kubenswrapper[4832]: E0312 14:48:58.619497 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.714831 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.714875 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.714886 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.714902 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.714914 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:58Z","lastTransitionTime":"2026-03-12T14:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.817446 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.817484 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.817495 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.817526 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.817538 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:58Z","lastTransitionTime":"2026-03-12T14:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.920077 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.920124 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.920134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.920152 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:58 crc kubenswrapper[4832]: I0312 14:48:58.920162 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:58Z","lastTransitionTime":"2026-03-12T14:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.022981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.023031 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.023041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.023056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.023064 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:59Z","lastTransitionTime":"2026-03-12T14:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.048932 4832 generic.go:334] "Generic (PLEG): container finished" podID="7d5f70b0-6d75-4511-8423-e826258274d1" containerID="9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a" exitCode=0 Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.049013 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" event={"ID":"7d5f70b0-6d75-4511-8423-e826258274d1","Type":"ContainerDied","Data":"9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a"} Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.053251 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ssdc8" event={"ID":"7d32edbc-db72-4220-bc60-8675d56c803c","Type":"ContainerStarted","Data":"30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65"} Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.062521 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerStarted","Data":"adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d"} Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.065603 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.090017 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 
14:48:59.107069 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3
b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.126458 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.126549 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.126561 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.126579 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.126593 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:59Z","lastTransitionTime":"2026-03-12T14:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.126771 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.137715 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.153554 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.164320 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.183758 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.193761 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.207112 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.218619 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.229127 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.229161 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.229172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.229187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.229198 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:59Z","lastTransitionTime":"2026-03-12T14:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.229596 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.242999 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.256648 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.270250 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.280532 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.293911 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.305321 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.320739 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.331449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.331485 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.331493 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.331521 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.331533 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:59Z","lastTransitionTime":"2026-03-12T14:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.332988 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 
14:48:59.343858 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3
b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.354260 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.365028 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.377035 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba54
0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.387610 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174
447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12
T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.402061 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.410618 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.421396 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:48:59Z is after 2025-08-24T17:21:41Z" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.434066 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.434106 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.434119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.434140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.434163 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:59Z","lastTransitionTime":"2026-03-12T14:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.536976 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.537057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.537083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.537112 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.537135 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:59Z","lastTransitionTime":"2026-03-12T14:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.619688 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:48:59 crc kubenswrapper[4832]: E0312 14:48:59.619832 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.639999 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.640113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.640295 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.640313 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.640326 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:59Z","lastTransitionTime":"2026-03-12T14:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.743011 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.743091 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.743134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.743155 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.743169 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:59Z","lastTransitionTime":"2026-03-12T14:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.845238 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.845310 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.845327 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.845350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.845364 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:59Z","lastTransitionTime":"2026-03-12T14:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.947266 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.947301 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.947312 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.947325 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:48:59 crc kubenswrapper[4832]: I0312 14:48:59.947333 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:48:59Z","lastTransitionTime":"2026-03-12T14:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.050265 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.050357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.050381 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.050411 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.050437 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:00Z","lastTransitionTime":"2026-03-12T14:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.070761 4832 generic.go:334] "Generic (PLEG): container finished" podID="7d5f70b0-6d75-4511-8423-e826258274d1" containerID="a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c" exitCode=0 Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.070809 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" event={"ID":"7d5f70b0-6d75-4511-8423-e826258274d1","Type":"ContainerDied","Data":"a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c"} Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.096549 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:00Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.111404 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b4
02572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:00Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.125186 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:00Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.137008 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:00Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.152047 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:00Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.154665 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.154693 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.154703 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.154719 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.154731 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:00Z","lastTransitionTime":"2026-03-12T14:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.165425 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:00Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.180993 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:00Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.193984 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:00Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.208967 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:00Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.222537 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:00Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.238065 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:00Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.250546 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:00Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.256871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.256906 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.256917 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.256934 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.256946 4832 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:00Z","lastTransitionTime":"2026-03-12T14:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.263726 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-scrip
t\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:00Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.276469 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca80194
3f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
3-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:00Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.359733 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.359774 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.359786 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.359803 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.359815 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:00Z","lastTransitionTime":"2026-03-12T14:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.461809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.461847 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.461857 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.461871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.461880 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:00Z","lastTransitionTime":"2026-03-12T14:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.563763 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.563805 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.563820 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.563840 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.563857 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:00Z","lastTransitionTime":"2026-03-12T14:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.619340 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.619340 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:00 crc kubenswrapper[4832]: E0312 14:49:00.619584 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:00 crc kubenswrapper[4832]: E0312 14:49:00.619695 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.666267 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.666330 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.666355 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.666387 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.666408 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:00Z","lastTransitionTime":"2026-03-12T14:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.769274 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.769326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.769341 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.769362 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.769378 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:00Z","lastTransitionTime":"2026-03-12T14:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.872466 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.872573 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.872591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.872614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.872633 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:00Z","lastTransitionTime":"2026-03-12T14:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.974639 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.974722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.974741 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.974762 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:00 crc kubenswrapper[4832]: I0312 14:49:00.974777 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:00Z","lastTransitionTime":"2026-03-12T14:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.076219 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.076292 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.076316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.076347 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.076371 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:01Z","lastTransitionTime":"2026-03-12T14:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.078630 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerStarted","Data":"cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392"} Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.081787 4832 generic.go:334] "Generic (PLEG): container finished" podID="7d5f70b0-6d75-4511-8423-e826258274d1" containerID="c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9" exitCode=0 Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.081819 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" event={"ID":"7d5f70b0-6d75-4511-8423-e826258274d1","Type":"ContainerDied","Data":"c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9"} Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.105769 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:01Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.122239 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:01Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.138284 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:01Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.151800 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:01Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.164929 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:01Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.176240 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:01Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.179356 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.179402 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.179420 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.179442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.179459 4832 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:01Z","lastTransitionTime":"2026-03-12T14:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.185577 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-scrip
t\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:01Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.205671 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca80194
3f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
3-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:01Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.219293 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:01Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.229490 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2
acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:01Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.249231 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:01Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.258632 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:01Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.270092 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:01Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.279072 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:01Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.281494 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.281536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.281547 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.281565 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.281578 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:01Z","lastTransitionTime":"2026-03-12T14:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.384796 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.384850 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.384867 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.384889 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.384908 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:01Z","lastTransitionTime":"2026-03-12T14:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.524743 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.525037 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.525127 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.525217 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.525297 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:01Z","lastTransitionTime":"2026-03-12T14:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.619033 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:01 crc kubenswrapper[4832]: E0312 14:49:01.619179 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.627766 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.627818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.627832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.627854 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.627866 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:01Z","lastTransitionTime":"2026-03-12T14:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.730669 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.730754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.730768 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.730786 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.730797 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:01Z","lastTransitionTime":"2026-03-12T14:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.832935 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.832993 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.833005 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.833023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.833037 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:01Z","lastTransitionTime":"2026-03-12T14:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.936758 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.936804 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.936817 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.936840 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:01 crc kubenswrapper[4832]: I0312 14:49:01.936856 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:01Z","lastTransitionTime":"2026-03-12T14:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.039293 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.039353 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.039377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.039406 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.039428 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:02Z","lastTransitionTime":"2026-03-12T14:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.089461 4832 generic.go:334] "Generic (PLEG): container finished" podID="7d5f70b0-6d75-4511-8423-e826258274d1" containerID="faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d" exitCode=0 Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.089551 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" event={"ID":"7d5f70b0-6d75-4511-8423-e826258274d1","Type":"ContainerDied","Data":"faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d"} Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.120116 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.141338 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.141385 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.141397 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.141414 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.141427 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:02Z","lastTransitionTime":"2026-03-12T14:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.142784 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.155297 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.167937 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.179996 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.188737 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.199124 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.212439 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.231358 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.240404 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.243822 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.243851 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.243859 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 
14:49:02.243925 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.243935 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:02Z","lastTransitionTime":"2026-03-12T14:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.252244 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.261252 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.272840 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.282471 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.346871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.346968 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.346982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.346997 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.347026 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:02Z","lastTransitionTime":"2026-03-12T14:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.448989 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.449033 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.449046 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.449062 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.449074 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:02Z","lastTransitionTime":"2026-03-12T14:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.551468 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.551518 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.551530 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.551553 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.551568 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:02Z","lastTransitionTime":"2026-03-12T14:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.618846 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.618846 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:02 crc kubenswrapper[4832]: E0312 14:49:02.618983 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:02 crc kubenswrapper[4832]: E0312 14:49:02.619044 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.636650 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.650471 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.653461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.653527 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.653541 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.653565 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.653578 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:02Z","lastTransitionTime":"2026-03-12T14:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.664723 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.676422 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.689626 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.698772 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.709486 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.718709 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.729701 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.740086 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.755215 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.755250 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.755263 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.755278 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.755289 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:02Z","lastTransitionTime":"2026-03-12T14:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.755869 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.765301 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.777918 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.788115 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:02Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.857558 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.857704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.857716 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.857730 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.857739 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:02Z","lastTransitionTime":"2026-03-12T14:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.960269 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.960308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.960318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.960331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:02 crc kubenswrapper[4832]: I0312 14:49:02.960341 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:02Z","lastTransitionTime":"2026-03-12T14:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.062311 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.062604 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.062613 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.062628 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.062637 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:03Z","lastTransitionTime":"2026-03-12T14:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.102205 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerStarted","Data":"112217e914d67ed3187d8080f4157fd54dc5182128c6c30b900e9ec0ae274677"} Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.102691 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.102753 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.107832 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" event={"ID":"7d5f70b0-6d75-4511-8423-e826258274d1","Type":"ContainerStarted","Data":"a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf"} Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.119279 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.133928 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.137779 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.147061 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.160013 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.164978 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.165033 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.165050 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.165074 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.165091 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:03Z","lastTransitionTime":"2026-03-12T14:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.188485 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://112217e914d67ed3187d8080f4157fd54dc5182128c6c30b900e9ec0ae274677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.200312 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.214868 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.234355 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.247784 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.259657 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.267543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 
14:49:03.267575 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.267585 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.267600 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.267610 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:03Z","lastTransitionTime":"2026-03-12T14:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.272681 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.281801 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.296117 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.308544 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.311295 4832 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.320719 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.334375 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.350701 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.364431 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.369184 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.369227 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.369243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.369265 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.369281 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:03Z","lastTransitionTime":"2026-03-12T14:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.379323 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.396687 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.405759 4832 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.405816 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.405851 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.405873 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:03Z","lastTransitionTime":"2026-03-12T14:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.407124 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://112217e914d67ed3187d8080f4157fd54dc5182128c6c30b900e9ec0ae274677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\
\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: E0312 14:49:03.419200 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.420660 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.423198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.423260 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.423285 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.423312 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.423336 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:03Z","lastTransitionTime":"2026-03-12T14:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.432176 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: E0312 14:49:03.438482 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.441384 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.441932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.442043 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.442102 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.442178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.442251 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:03Z","lastTransitionTime":"2026-03-12T14:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:03 crc kubenswrapper[4832]: E0312 14:49:03.454090 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.455534 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.457636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.457679 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.457690 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.457706 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.457717 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:03Z","lastTransitionTime":"2026-03-12T14:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.466446 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: E0312 14:49:03.467958 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.471155 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.471261 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.471326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.471387 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.471440 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:03Z","lastTransitionTime":"2026-03-12T14:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.477643 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: E0312 14:49:03.480628 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: E0312 14:49:03.480743 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.482207 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.482233 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.482243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.482257 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.482266 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:03Z","lastTransitionTime":"2026-03-12T14:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.488596 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.497751 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:03Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.530360 4832 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.584483 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.584535 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.584546 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.584562 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.584575 4832 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:03Z","lastTransitionTime":"2026-03-12T14:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.619306 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:03 crc kubenswrapper[4832]: E0312 14:49:03.619418 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.686870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.686909 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.686918 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.686933 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.686944 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:03Z","lastTransitionTime":"2026-03-12T14:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.788626 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.788672 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.788681 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.788697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.788707 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:03Z","lastTransitionTime":"2026-03-12T14:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.891023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.892158 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.892188 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.892209 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.892226 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:03Z","lastTransitionTime":"2026-03-12T14:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.994939 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.994967 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.994975 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.994986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:03 crc kubenswrapper[4832]: I0312 14:49:03.994996 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:03Z","lastTransitionTime":"2026-03-12T14:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.097741 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.097807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.097825 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.097855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.097869 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:04Z","lastTransitionTime":"2026-03-12T14:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.111775 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.136911 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.150357 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:04Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.165814 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:04Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.177036 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:04Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.187535 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:04Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.198390 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:04Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.200357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.200393 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.200404 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.200420 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.200428 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:04Z","lastTransitionTime":"2026-03-12T14:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.212338 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:04Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.222173 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:04Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.231736 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:04Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.242643 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:04Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.253747 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba54
0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:04Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.265801 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174
447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12
T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:04Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.285646 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://112217e914d67ed3187d8080f4157fd54dc5182128c6c30b900e9ec0ae274677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:04Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.297224 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:04Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.303356 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.303410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.303426 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.303449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 
14:49:04.303467 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:04Z","lastTransitionTime":"2026-03-12T14:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.315906 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:04Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.406720 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.406768 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.406781 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.406799 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.406811 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:04Z","lastTransitionTime":"2026-03-12T14:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.508750 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.508797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.508806 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.508820 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.508829 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:04Z","lastTransitionTime":"2026-03-12T14:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.613007 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.613047 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.613057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.613069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.613079 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:04Z","lastTransitionTime":"2026-03-12T14:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.619407 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:04 crc kubenswrapper[4832]: E0312 14:49:04.619531 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.619594 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:04 crc kubenswrapper[4832]: E0312 14:49:04.619747 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.715648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.715698 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.715709 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.715724 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.715735 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:04Z","lastTransitionTime":"2026-03-12T14:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.818299 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.818325 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.818334 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.818347 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.818356 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:04Z","lastTransitionTime":"2026-03-12T14:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.920353 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.920940 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.920961 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.920978 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:04 crc kubenswrapper[4832]: I0312 14:49:04.920989 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:04Z","lastTransitionTime":"2026-03-12T14:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.023533 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.023571 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.023581 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.023596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.023604 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:05Z","lastTransitionTime":"2026-03-12T14:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.126875 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.126932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.126952 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.126976 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.126996 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:05Z","lastTransitionTime":"2026-03-12T14:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.229320 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.229358 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.229369 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.229385 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.229398 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:05Z","lastTransitionTime":"2026-03-12T14:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.331498 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.331549 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.331558 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.331570 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.331579 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:05Z","lastTransitionTime":"2026-03-12T14:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.434280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.434596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.434608 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.434625 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.434637 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:05Z","lastTransitionTime":"2026-03-12T14:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.537878 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.537955 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.537982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.538009 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.538029 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:05Z","lastTransitionTime":"2026-03-12T14:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.619327 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:05 crc kubenswrapper[4832]: E0312 14:49:05.619659 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.640696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.640753 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.640770 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.640797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.640816 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:05Z","lastTransitionTime":"2026-03-12T14:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.743723 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.743789 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.743806 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.743831 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.743847 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:05Z","lastTransitionTime":"2026-03-12T14:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.846666 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.846713 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.846725 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.846744 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.846758 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:05Z","lastTransitionTime":"2026-03-12T14:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.949993 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.950084 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.950094 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.950114 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:05 crc kubenswrapper[4832]: I0312 14:49:05.950125 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:05Z","lastTransitionTime":"2026-03-12T14:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.052754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.052821 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.052835 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.052852 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.052863 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:06Z","lastTransitionTime":"2026-03-12T14:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.117544 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5zjpx_18cc235e-1890-485d-8ca2-bf03b2006ab9/ovnkube-controller/0.log" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.121076 4832 generic.go:334] "Generic (PLEG): container finished" podID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerID="112217e914d67ed3187d8080f4157fd54dc5182128c6c30b900e9ec0ae274677" exitCode=1 Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.121115 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerDied","Data":"112217e914d67ed3187d8080f4157fd54dc5182128c6c30b900e9ec0ae274677"} Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.121738 4832 scope.go:117] "RemoveContainer" containerID="112217e914d67ed3187d8080f4157fd54dc5182128c6c30b900e9ec0ae274677" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.135208 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a
0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:06Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.155859 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.155901 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.155914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:06 crc 
kubenswrapper[4832]: I0312 14:49:06.155932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.155944 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:06Z","lastTransitionTime":"2026-03-12T14:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.161311 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://112217e914d67ed3187d8080f4157fd54dc5182128c6c30b900e9ec0ae274677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://112217e914d67ed3187d8080f4157fd54dc5182128c6c30b900e9ec0ae274677\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:05Z\\\",\\\"message\\\":\\\"-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:05.288974 6616 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0312 14:49:05.288981 6616 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0312 14:49:05.289045 6616 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0312 14:49:05.289073 6616 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0312 14:49:05.289138 6616 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 14:49:05.289169 6616 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 14:49:05.289673 6616 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 14:49:05.290409 6616 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 14:49:05.290665 6616 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d
28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:06Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.176916 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:06Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.192745 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:06Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.207818 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:06Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.222932 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:06Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.239400 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:06Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.254575 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:06Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.264617 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.264826 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.265143 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.265316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.265497 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:06Z","lastTransitionTime":"2026-03-12T14:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.271116 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:06Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.285807 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:06Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.296716 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:06Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.306535 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:06Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.316030 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:06Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.329659 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:06Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.369005 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.369094 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.369110 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.369132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.369143 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:06Z","lastTransitionTime":"2026-03-12T14:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.471468 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.471551 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.471567 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.471584 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.471596 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:06Z","lastTransitionTime":"2026-03-12T14:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.574645 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.574720 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.574733 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.574776 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.574803 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:06Z","lastTransitionTime":"2026-03-12T14:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.618931 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.619052 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:06 crc kubenswrapper[4832]: E0312 14:49:06.619192 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:06 crc kubenswrapper[4832]: E0312 14:49:06.619065 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.677079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.677111 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.677121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.677136 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.677146 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:06Z","lastTransitionTime":"2026-03-12T14:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.779073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.779148 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.779175 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.779210 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.779236 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:06Z","lastTransitionTime":"2026-03-12T14:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.881923 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.881960 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.881969 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.881982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.881991 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:06Z","lastTransitionTime":"2026-03-12T14:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.984024 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.984055 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.984063 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.984075 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:06 crc kubenswrapper[4832]: I0312 14:49:06.984092 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:06Z","lastTransitionTime":"2026-03-12T14:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.086203 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.086238 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.086246 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.086259 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.086268 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:07Z","lastTransitionTime":"2026-03-12T14:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.126712 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5zjpx_18cc235e-1890-485d-8ca2-bf03b2006ab9/ovnkube-controller/1.log" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.127625 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5zjpx_18cc235e-1890-485d-8ca2-bf03b2006ab9/ovnkube-controller/0.log" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.130550 4832 generic.go:334] "Generic (PLEG): container finished" podID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerID="6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520" exitCode=1 Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.130605 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerDied","Data":"6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520"} Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.130661 4832 scope.go:117] "RemoveContainer" containerID="112217e914d67ed3187d8080f4157fd54dc5182128c6c30b900e9ec0ae274677" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.131166 4832 scope.go:117] "RemoveContainer" containerID="6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520" Mar 12 14:49:07 crc kubenswrapper[4832]: E0312 14:49:07.131328 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.142496 4832 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:07Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.155435 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e5431
9f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:07Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.164618 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d79342
6f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:07Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.176262 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:07Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.185613 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:07Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.189034 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.189078 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.189089 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.189107 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.189120 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:07Z","lastTransitionTime":"2026-03-12T14:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.196369 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:07Z 
is after 2025-08-24T17:21:41Z" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.204611 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:07Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.222917 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://112217e914d67ed3187d8080f4157fd54dc5182128c6c30b900e9ec0ae274677\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:05Z\\\",\\\"message\\\":\\\"-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:05.288974 6616 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 14:49:05.288981 6616 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0312 
14:49:05.289045 6616 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0312 14:49:05.289073 6616 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0312 14:49:05.289138 6616 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 14:49:05.289169 6616 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 14:49:05.289673 6616 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 14:49:05.290409 6616 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 14:49:05.290665 6616 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:07Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0312 14:49:06.927851 6776 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.927964 6776 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928120 6776 reflector.go:311] Stopping reflector 
*v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928393 6776 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0312 14:49:06.928447 6776 factory.go:656] Stopping watch factory\\\\nI0312 14:49:06.928459 6776 handler.go:208] Removed *v1.Node event handler 2\\\\nI0312 14:49:06.934427 6776 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0312 14:49:06.934453 6776 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0312 14:49:06.934488 6776 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:06.934532 6776 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:06.934600 6776 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath
\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:07Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.231321 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:07Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.242099 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:07Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.252650 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:07Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.264035 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:07Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.275133 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:07Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.290490 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:07Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.291637 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.291677 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.291687 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.291703 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.291714 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:07Z","lastTransitionTime":"2026-03-12T14:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.394216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.394260 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.394305 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.394330 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.394347 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:07Z","lastTransitionTime":"2026-03-12T14:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.496460 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.496497 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.496525 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.496539 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.496550 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:07Z","lastTransitionTime":"2026-03-12T14:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.598104 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.598148 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.598158 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.598170 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.598179 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:07Z","lastTransitionTime":"2026-03-12T14:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.618806 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:07 crc kubenswrapper[4832]: E0312 14:49:07.618942 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.702002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.702052 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.702063 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.702080 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.702093 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:07Z","lastTransitionTime":"2026-03-12T14:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.804290 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.804325 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.804334 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.804349 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.804360 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:07Z","lastTransitionTime":"2026-03-12T14:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.906780 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.906818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.906829 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.906866 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:07 crc kubenswrapper[4832]: I0312 14:49:07.906878 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:07Z","lastTransitionTime":"2026-03-12T14:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.010339 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.010410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.010429 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.010453 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.010479 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:08Z","lastTransitionTime":"2026-03-12T14:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.113302 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.113379 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.113393 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.113749 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.113790 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:08Z","lastTransitionTime":"2026-03-12T14:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.136302 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5zjpx_18cc235e-1890-485d-8ca2-bf03b2006ab9/ovnkube-controller/1.log" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.140627 4832 scope.go:117] "RemoveContainer" containerID="6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520" Mar 12 14:49:08 crc kubenswrapper[4832]: E0312 14:49:08.140891 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.156198 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.172159 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.193636 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.208000 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.220538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.220622 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.220642 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.220671 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.220690 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:08Z","lastTransitionTime":"2026-03-12T14:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.233753 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:07Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0312 14:49:06.927851 6776 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.927964 6776 reflector.go:311] Stopping reflector 
*v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928120 6776 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928393 6776 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0312 14:49:06.928447 6776 factory.go:656] Stopping watch factory\\\\nI0312 14:49:06.928459 6776 handler.go:208] Removed *v1.Node event handler 2\\\\nI0312 14:49:06.934427 6776 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0312 14:49:06.934453 6776 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0312 14:49:06.934488 6776 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:06.934532 6776 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:06.934600 6776 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0ad
b7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.246007 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.258393 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.267289 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.277996 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.290933 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.302661 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.314454 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.325653 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.325704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.325717 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.325733 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.325745 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:08Z","lastTransitionTime":"2026-03-12T14:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.326695 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.337081 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.429039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.429087 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.429103 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.429126 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.429142 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:08Z","lastTransitionTime":"2026-03-12T14:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.532029 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.532099 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.532124 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.532153 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.532177 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:08Z","lastTransitionTime":"2026-03-12T14:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.564539 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42"] Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.565180 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.567931 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.568679 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.581687 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.599041 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f191cdcc-8d3e-4f37-8cda-a312cac33177-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-65g42\" (UID: \"f191cdcc-8d3e-4f37-8cda-a312cac33177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.599185 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f191cdcc-8d3e-4f37-8cda-a312cac33177-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-65g42\" (UID: \"f191cdcc-8d3e-4f37-8cda-a312cac33177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" Mar 12 14:49:08 
crc kubenswrapper[4832]: I0312 14:49:08.599231 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f191cdcc-8d3e-4f37-8cda-a312cac33177-env-overrides\") pod \"ovnkube-control-plane-749d76644c-65g42\" (UID: \"f191cdcc-8d3e-4f37-8cda-a312cac33177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.599283 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr6vb\" (UniqueName: \"kubernetes.io/projected/f191cdcc-8d3e-4f37-8cda-a312cac33177-kube-api-access-hr6vb\") pod \"ovnkube-control-plane-749d76644c-65g42\" (UID: \"f191cdcc-8d3e-4f37-8cda-a312cac33177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.606410 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.619335 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:08 crc kubenswrapper[4832]: E0312 14:49:08.619519 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.619642 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:08 crc kubenswrapper[4832]: E0312 14:49:08.619783 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.628334 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":
\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.634420 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.634458 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.634470 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.634485 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.634497 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:08Z","lastTransitionTime":"2026-03-12T14:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.645202 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.673171 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6b
fcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.692589 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f191cdcc-8d3e-4f37-8cda-a312cac33177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65g42\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.699785 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr6vb\" (UniqueName: \"kubernetes.io/projected/f191cdcc-8d3e-4f37-8cda-a312cac33177-kube-api-access-hr6vb\") pod \"ovnkube-control-plane-749d76644c-65g42\" (UID: \"f191cdcc-8d3e-4f37-8cda-a312cac33177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.699838 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f191cdcc-8d3e-4f37-8cda-a312cac33177-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-65g42\" (UID: \"f191cdcc-8d3e-4f37-8cda-a312cac33177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.699857 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f191cdcc-8d3e-4f37-8cda-a312cac33177-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-65g42\" (UID: \"f191cdcc-8d3e-4f37-8cda-a312cac33177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.699875 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f191cdcc-8d3e-4f37-8cda-a312cac33177-env-overrides\") pod \"ovnkube-control-plane-749d76644c-65g42\" (UID: \"f191cdcc-8d3e-4f37-8cda-a312cac33177\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.700369 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f191cdcc-8d3e-4f37-8cda-a312cac33177-env-overrides\") pod \"ovnkube-control-plane-749d76644c-65g42\" (UID: \"f191cdcc-8d3e-4f37-8cda-a312cac33177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.700814 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f191cdcc-8d3e-4f37-8cda-a312cac33177-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-65g42\" (UID: \"f191cdcc-8d3e-4f37-8cda-a312cac33177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.707399 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.708873 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f191cdcc-8d3e-4f37-8cda-a312cac33177-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-65g42\" (UID: \"f191cdcc-8d3e-4f37-8cda-a312cac33177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.726875 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.728787 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr6vb\" (UniqueName: \"kubernetes.io/projected/f191cdcc-8d3e-4f37-8cda-a312cac33177-kube-api-access-hr6vb\") pod \"ovnkube-control-plane-749d76644c-65g42\" (UID: \"f191cdcc-8d3e-4f37-8cda-a312cac33177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.738006 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.738056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.738071 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.738092 4832 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.738106 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:08Z","lastTransitionTime":"2026-03-12T14:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.742438 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.771902 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.787800 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2
acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.812878 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:07Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0312 14:49:06.927851 6776 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.927964 6776 reflector.go:311] Stopping reflector 
*v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928120 6776 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928393 6776 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0312 14:49:06.928447 6776 factory.go:656] Stopping watch factory\\\\nI0312 14:49:06.928459 6776 handler.go:208] Removed *v1.Node event handler 2\\\\nI0312 14:49:06.934427 6776 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0312 14:49:06.934453 6776 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0312 14:49:06.934488 6776 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:06.934532 6776 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:06.934600 6776 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0ad
b7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.827728 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.840893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.840954 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.840963 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.840978 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.840988 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:08Z","lastTransitionTime":"2026-03-12T14:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.849948 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.866731 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:08Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.889302 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.943605 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.943679 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.943698 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.943723 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:08 crc kubenswrapper[4832]: I0312 14:49:08.943741 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:08Z","lastTransitionTime":"2026-03-12T14:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.047368 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.047472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.047488 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.047521 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.047560 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:09Z","lastTransitionTime":"2026-03-12T14:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.142914 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" event={"ID":"f191cdcc-8d3e-4f37-8cda-a312cac33177","Type":"ContainerStarted","Data":"bcb8871ea5fb783f36939e2361397e24b80f12fe31c92c299fa4783d0f584f5e"} Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.149521 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.149563 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.149574 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.149589 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.149601 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:09Z","lastTransitionTime":"2026-03-12T14:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.252615 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.252658 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.252667 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.252682 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.252696 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:09Z","lastTransitionTime":"2026-03-12T14:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.355089 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.355129 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.355138 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.355153 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.355163 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:09Z","lastTransitionTime":"2026-03-12T14:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.457333 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.457385 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.457401 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.457419 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.457430 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:09Z","lastTransitionTime":"2026-03-12T14:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.559783 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.559821 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.559832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.559846 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.559855 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:09Z","lastTransitionTime":"2026-03-12T14:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.619062 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:09 crc kubenswrapper[4832]: E0312 14:49:09.619209 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.662125 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.662163 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.662174 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.662187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.662196 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:09Z","lastTransitionTime":"2026-03-12T14:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.764830 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.764877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.764894 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.764919 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.764934 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:09Z","lastTransitionTime":"2026-03-12T14:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.867457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.867521 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.867531 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.867547 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.867556 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:09Z","lastTransitionTime":"2026-03-12T14:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.969985 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.970023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.970031 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.970048 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:09 crc kubenswrapper[4832]: I0312 14:49:09.970057 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:09Z","lastTransitionTime":"2026-03-12T14:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.072493 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.072582 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.072598 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.072621 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.072636 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:10Z","lastTransitionTime":"2026-03-12T14:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.092864 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-lmjrb"] Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.093332 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.093397 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.104932 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.116299 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.129649 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.142974 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f191cdcc-8d3e-4f37-8cda-a312cac33177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65g42\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.148134 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" event={"ID":"f191cdcc-8d3e-4f37-8cda-a312cac33177","Type":"ContainerStarted","Data":"e38bc2d755830005662e8d0bf0b5f7e1a5fad2b56513e580639124d49f6d7fb3"} Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.148200 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" event={"ID":"f191cdcc-8d3e-4f37-8cda-a312cac33177","Type":"ContainerStarted","Data":"1c8ed05416542ba370b5239dcc550e0077b4a52050843ef7751dc812e8794e42"} Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.155635 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a
0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.172259 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:07Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0312 14:49:06.927851 6776 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.927964 6776 reflector.go:311] Stopping reflector 
*v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928120 6776 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928393 6776 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0312 14:49:06.928447 6776 factory.go:656] Stopping watch factory\\\\nI0312 14:49:06.928459 6776 handler.go:208] Removed *v1.Node event handler 2\\\\nI0312 14:49:06.934427 6776 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0312 14:49:06.934453 6776 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0312 14:49:06.934488 6776 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:06.934532 6776 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:06.934600 6776 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0ad
b7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.175156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.175199 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.175210 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.175227 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.175239 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:10Z","lastTransitionTime":"2026-03-12T14:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.183378 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.196224 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.207371 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.217998 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgtvv\" (UniqueName: \"kubernetes.io/projected/c3abc18e-3b7e-4afe-b35b-3b619290e875-kube-api-access-wgtvv\") pod \"network-metrics-daemon-lmjrb\" (UID: \"c3abc18e-3b7e-4afe-b35b-3b619290e875\") " pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.218043 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs\") pod \"network-metrics-daemon-lmjrb\" (UID: \"c3abc18e-3b7e-4afe-b35b-3b619290e875\") " pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.218592 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.227948 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3abc18e-3b7e-4afe-b35b-3b619290e875\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lmjrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc 
kubenswrapper[4832]: I0312 14:49:10.241167 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.252787 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.267313 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.277780 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.277832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.277846 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.277868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.277883 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:10Z","lastTransitionTime":"2026-03-12T14:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.285318 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.299650 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.317796 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.319251 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgtvv\" (UniqueName: \"kubernetes.io/projected/c3abc18e-3b7e-4afe-b35b-3b619290e875-kube-api-access-wgtvv\") pod \"network-metrics-daemon-lmjrb\" (UID: \"c3abc18e-3b7e-4afe-b35b-3b619290e875\") " 
pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.319312 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs\") pod \"network-metrics-daemon-lmjrb\" (UID: \"c3abc18e-3b7e-4afe-b35b-3b619290e875\") " pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.319444 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.319585 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs podName:c3abc18e-3b7e-4afe-b35b-3b619290e875 nodeName:}" failed. No retries permitted until 2026-03-12 14:49:10.819542668 +0000 UTC m=+109.463556914 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs") pod "network-metrics-daemon-lmjrb" (UID: "c3abc18e-3b7e-4afe-b35b-3b619290e875") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.329893 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.338737 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgtvv\" (UniqueName: \"kubernetes.io/projected/c3abc18e-3b7e-4afe-b35b-3b619290e875-kube-api-access-wgtvv\") pod \"network-metrics-daemon-lmjrb\" (UID: \"c3abc18e-3b7e-4afe-b35b-3b619290e875\") " pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.350589 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.366199 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2
acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.380617 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.380645 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.380653 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 
12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.380681 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.380690 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:10Z","lastTransitionTime":"2026-03-12T14:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.387452 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:07Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0312 14:49:06.927851 6776 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.927964 6776 reflector.go:311] Stopping reflector 
*v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928120 6776 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928393 6776 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0312 14:49:06.928447 6776 factory.go:656] Stopping watch factory\\\\nI0312 14:49:06.928459 6776 handler.go:208] Removed *v1.Node event handler 2\\\\nI0312 14:49:06.934427 6776 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0312 14:49:06.934453 6776 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0312 14:49:06.934488 6776 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:06.934532 6776 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:06.934600 6776 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0ad
b7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.399752 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.409902 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3abc18e-3b7e-4afe-b35b-3b619290e875\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lmjrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc 
kubenswrapper[4832]: I0312 14:49:10.419940 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.420224 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:49:42.420194844 +0000 UTC m=+141.064209060 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.424107 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d
db0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.436126 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.447592 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.457585 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.468487 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.476589 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.482805 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.482854 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.482870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.482891 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.482906 4832 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:10Z","lastTransitionTime":"2026-03-12T14:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.486570 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-scrip
t\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.498943 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\
\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.507530 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f191cdcc-8d3e-4f37-8cda-a312cac33177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8ed05416542ba370b5239dcc550e0077b4a52050843ef7751dc812e8794e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38bc2d755830005662e8d0bf0b5f7e1a5fad
2b56513e580639124d49f6d7fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65g42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:10Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.521059 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.521131 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.521168 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.521223 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.521272 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.521290 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.521301 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.521243 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.521308 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 14:49:42.52129211 +0000 UTC m=+141.165306326 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.521349 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.521358 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 14:49:42.521346561 +0000 UTC m=+141.165360787 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.521399 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.521415 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 14:49:42.521402023 +0000 UTC m=+141.165416249 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.521426 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.521444 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.521553 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 14:49:42.521492535 +0000 UTC m=+141.165506801 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.585822 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.585870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.585881 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.585898 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.585910 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:10Z","lastTransitionTime":"2026-03-12T14:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.618751 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.618750 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.618904 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.619025 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.619577 4832 scope.go:117] "RemoveContainer" containerID="628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.688700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.688751 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.688766 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.688784 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.688799 4832 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:10Z","lastTransitionTime":"2026-03-12T14:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.791141 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.791173 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.791182 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.791195 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.791204 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:10Z","lastTransitionTime":"2026-03-12T14:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.824097 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs\") pod \"network-metrics-daemon-lmjrb\" (UID: \"c3abc18e-3b7e-4afe-b35b-3b619290e875\") " pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.824287 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:49:10 crc kubenswrapper[4832]: E0312 14:49:10.824363 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs podName:c3abc18e-3b7e-4afe-b35b-3b619290e875 nodeName:}" failed. No retries permitted until 2026-03-12 14:49:11.824344753 +0000 UTC m=+110.468358979 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs") pod "network-metrics-daemon-lmjrb" (UID: "c3abc18e-3b7e-4afe-b35b-3b619290e875") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.893380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.893414 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.893434 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.893448 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.893460 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:10Z","lastTransitionTime":"2026-03-12T14:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.995354 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.995380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.995388 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.995400 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:10 crc kubenswrapper[4832]: I0312 14:49:10.995408 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:10Z","lastTransitionTime":"2026-03-12T14:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.098069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.098120 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.098136 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.098157 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.098173 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:11Z","lastTransitionTime":"2026-03-12T14:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.153901 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.156426 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b8058e6536cfd591c8a4e88d5007d0a72ed0b92926d01f5909915aaab440a686"} Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.157706 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.173861 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:11Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.190141 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:11Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.200687 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.200733 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.200746 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.200764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.200778 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:11Z","lastTransitionTime":"2026-03-12T14:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.212001 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:11Z 
is after 2025-08-24T17:21:41Z" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.224497 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:11Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.255873 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:07Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0312 14:49:06.927851 6776 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.927964 6776 reflector.go:311] Stopping reflector 
*v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928120 6776 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928393 6776 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0312 14:49:06.928447 6776 factory.go:656] Stopping watch factory\\\\nI0312 14:49:06.928459 6776 handler.go:208] Removed *v1.Node event handler 2\\\\nI0312 14:49:06.934427 6776 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0312 14:49:06.934453 6776 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0312 14:49:06.934488 6776 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:06.934532 6776 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:06.934600 6776 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0ad
b7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:11Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.270173 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:11Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.288038 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3abc18e-3b7e-4afe-b35b-3b619290e875\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lmjrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:11Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:11 crc 
kubenswrapper[4832]: I0312 14:49:11.303388 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.303422 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.303431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.303444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.303454 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:11Z","lastTransitionTime":"2026-03-12T14:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.304054 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8058e6536cfd591c8a4e88d5007d0a72ed0b92926d01f5909915aaab440a686\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:11Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.317859 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:11Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.332091 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:11Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.345421 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:11Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.359038 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:11Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.369624 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:11Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.380688 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:11Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.396681 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:11Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.405778 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.405826 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.405836 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.405850 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.405859 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:11Z","lastTransitionTime":"2026-03-12T14:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.408276 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f191cdcc-8d3e-4f37-8cda-a312cac33177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8ed05416542ba370b5239dcc550e0077b4a52050843ef7751dc812e8794e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38bc2d755830005662e8d0bf0b5f7e1a5fad2b56513e580639124d49f6d7fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65g42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:11Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.507871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:11 crc 
kubenswrapper[4832]: I0312 14:49:11.508151 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.508274 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.508373 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.508471 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:11Z","lastTransitionTime":"2026-03-12T14:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.610717 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.610998 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.611174 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.611258 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.611334 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:11Z","lastTransitionTime":"2026-03-12T14:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.619135 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:11 crc kubenswrapper[4832]: E0312 14:49:11.619292 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.619667 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:11 crc kubenswrapper[4832]: E0312 14:49:11.619836 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.713962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.714008 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.714025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.714049 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.714067 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:11Z","lastTransitionTime":"2026-03-12T14:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.817203 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.817270 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.817296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.817328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.817349 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:11Z","lastTransitionTime":"2026-03-12T14:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.836973 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs\") pod \"network-metrics-daemon-lmjrb\" (UID: \"c3abc18e-3b7e-4afe-b35b-3b619290e875\") " pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:11 crc kubenswrapper[4832]: E0312 14:49:11.837138 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:49:11 crc kubenswrapper[4832]: E0312 14:49:11.837204 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs podName:c3abc18e-3b7e-4afe-b35b-3b619290e875 nodeName:}" failed. No retries permitted until 2026-03-12 14:49:13.837188284 +0000 UTC m=+112.481202510 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs") pod "network-metrics-daemon-lmjrb" (UID: "c3abc18e-3b7e-4afe-b35b-3b619290e875") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.919849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.920124 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.920191 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.920312 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:11 crc kubenswrapper[4832]: I0312 14:49:11.920403 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:11Z","lastTransitionTime":"2026-03-12T14:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.023195 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.023606 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.023750 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.023919 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.024078 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:12Z","lastTransitionTime":"2026-03-12T14:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.127587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.127627 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.127643 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.127661 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.127673 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:12Z","lastTransitionTime":"2026-03-12T14:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.230557 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.230622 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.230640 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.230663 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.230680 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:12Z","lastTransitionTime":"2026-03-12T14:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.333784 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.333818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.333829 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.333844 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.333855 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:12Z","lastTransitionTime":"2026-03-12T14:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.436347 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.436454 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.436473 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.436495 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.436539 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:12Z","lastTransitionTime":"2026-03-12T14:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.539964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.540046 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.540065 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.540092 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.540111 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:12Z","lastTransitionTime":"2026-03-12T14:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.620027 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:12 crc kubenswrapper[4832]: E0312 14:49:12.620212 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.620309 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:12 crc kubenswrapper[4832]: E0312 14:49:12.620530 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.637807 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f65
61ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:12Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.642172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.642247 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.642272 4832 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.642302 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.642324 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:12Z","lastTransitionTime":"2026-03-12T14:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.659763 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:12Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.678498 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:12Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.695406 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:12Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.709970 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:12Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.738455 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:07Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0312 14:49:06.927851 6776 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.927964 6776 reflector.go:311] Stopping reflector 
*v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928120 6776 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928393 6776 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0312 14:49:06.928447 6776 factory.go:656] Stopping watch factory\\\\nI0312 14:49:06.928459 6776 handler.go:208] Removed *v1.Node event handler 2\\\\nI0312 14:49:06.934427 6776 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0312 14:49:06.934453 6776 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0312 14:49:06.934488 6776 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:06.934532 6776 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:06.934600 6776 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0ad
b7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:12Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.745007 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.745040 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.745049 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.745063 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.745073 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:12Z","lastTransitionTime":"2026-03-12T14:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.750835 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3abc18e-3b7e-4afe-b35b-3b619290e875\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lmjrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:12Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:12 crc 
kubenswrapper[4832]: I0312 14:49:12.772598 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8058e6536cfd591c8a4e88d5007d0a72ed0b92926d01f5909915aaab440a686\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:12Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.786011 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:12Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.798642 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:12Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.809326 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:12Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.820564 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:12Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.830892 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:12Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.842478 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:12Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.847298 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.847344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.847357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.847371 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.847381 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:12Z","lastTransitionTime":"2026-03-12T14:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.857248 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:12Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.869808 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f191cdcc-8d3e-4f37-8cda-a312cac33177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8ed05416542ba370b5239dcc550e0077b4a52050843ef7751dc812e8794e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38bc2d755830005662e8d0bf0b5f7e1a5fad2b56513e580639124d49f6d7fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65g42\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:12Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.950034 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.950068 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.950082 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.950102 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:12 crc kubenswrapper[4832]: I0312 14:49:12.950114 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:12Z","lastTransitionTime":"2026-03-12T14:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.053460 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.053503 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.053534 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.053551 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.053562 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:13Z","lastTransitionTime":"2026-03-12T14:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.157279 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.157345 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.157368 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.157399 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.157422 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:13Z","lastTransitionTime":"2026-03-12T14:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.260835 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.260881 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.260891 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.260906 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.260917 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:13Z","lastTransitionTime":"2026-03-12T14:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.363132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.363187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.363202 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.363221 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.363235 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:13Z","lastTransitionTime":"2026-03-12T14:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.465228 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.465279 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.465293 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.465313 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.465328 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:13Z","lastTransitionTime":"2026-03-12T14:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.532214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.532268 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.532280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.532296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.532307 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:13Z","lastTransitionTime":"2026-03-12T14:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:13 crc kubenswrapper[4832]: E0312 14:49:13.546891 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:13Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.552155 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.552200 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.552216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.552236 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.552251 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:13Z","lastTransitionTime":"2026-03-12T14:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:13 crc kubenswrapper[4832]: E0312 14:49:13.570105 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:13Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.573575 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.573611 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.573623 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.573636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.573644 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:13Z","lastTransitionTime":"2026-03-12T14:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:13 crc kubenswrapper[4832]: E0312 14:49:13.584481 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:13Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.588374 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.588414 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.588425 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.588443 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.588457 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:13Z","lastTransitionTime":"2026-03-12T14:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:13 crc kubenswrapper[4832]: E0312 14:49:13.604498 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:13Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.609066 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.609108 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.609117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.609130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.609139 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:13Z","lastTransitionTime":"2026-03-12T14:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.618826 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.618874 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:13 crc kubenswrapper[4832]: E0312 14:49:13.618938 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:13 crc kubenswrapper[4832]: E0312 14:49:13.619024 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:13 crc kubenswrapper[4832]: E0312 14:49:13.620374 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:13Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:13 crc kubenswrapper[4832]: E0312 14:49:13.621205 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.622698 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.622727 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.622738 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.622752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.622764 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:13Z","lastTransitionTime":"2026-03-12T14:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.727386 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.727460 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.727482 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.727547 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.727570 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:13Z","lastTransitionTime":"2026-03-12T14:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.830533 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.830569 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.830578 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.830593 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.830603 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:13Z","lastTransitionTime":"2026-03-12T14:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.855158 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs\") pod \"network-metrics-daemon-lmjrb\" (UID: \"c3abc18e-3b7e-4afe-b35b-3b619290e875\") " pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:13 crc kubenswrapper[4832]: E0312 14:49:13.855314 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:49:13 crc kubenswrapper[4832]: E0312 14:49:13.855392 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs podName:c3abc18e-3b7e-4afe-b35b-3b619290e875 nodeName:}" failed. No retries permitted until 2026-03-12 14:49:17.855374756 +0000 UTC m=+116.499388982 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs") pod "network-metrics-daemon-lmjrb" (UID: "c3abc18e-3b7e-4afe-b35b-3b619290e875") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.933082 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.933136 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.933149 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.933168 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:13 crc kubenswrapper[4832]: I0312 14:49:13.933181 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:13Z","lastTransitionTime":"2026-03-12T14:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.036191 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.036266 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.036288 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.036315 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.036337 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:14Z","lastTransitionTime":"2026-03-12T14:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.139373 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.139441 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.139460 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.139485 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.139535 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:14Z","lastTransitionTime":"2026-03-12T14:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.242069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.242122 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.242133 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.242150 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.242164 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:14Z","lastTransitionTime":"2026-03-12T14:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.344722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.344764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.344775 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.344791 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.344803 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:14Z","lastTransitionTime":"2026-03-12T14:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.447257 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.447312 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.447328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.447348 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.447361 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:14Z","lastTransitionTime":"2026-03-12T14:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.550316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.550370 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.550386 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.550407 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.550426 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:14Z","lastTransitionTime":"2026-03-12T14:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.618972 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.619071 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:14 crc kubenswrapper[4832]: E0312 14:49:14.619114 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:14 crc kubenswrapper[4832]: E0312 14:49:14.619397 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.631818 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.653285 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.653319 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.653334 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.653350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.653360 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:14Z","lastTransitionTime":"2026-03-12T14:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.755485 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.755536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.755547 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.755563 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.755574 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:14Z","lastTransitionTime":"2026-03-12T14:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.858487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.858540 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.858550 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.858566 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.858576 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:14Z","lastTransitionTime":"2026-03-12T14:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.961497 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.961571 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.961581 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.961595 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:14 crc kubenswrapper[4832]: I0312 14:49:14.961605 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:14Z","lastTransitionTime":"2026-03-12T14:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.065280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.065351 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.065371 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.065398 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.065419 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:15Z","lastTransitionTime":"2026-03-12T14:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.168741 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.168800 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.168811 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.168828 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.168842 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:15Z","lastTransitionTime":"2026-03-12T14:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.270958 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.270994 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.271005 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.271021 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.271032 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:15Z","lastTransitionTime":"2026-03-12T14:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.372608 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.372645 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.372656 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.372674 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.372686 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:15Z","lastTransitionTime":"2026-03-12T14:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.503656 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.503969 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.504058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.504138 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.504212 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:15Z","lastTransitionTime":"2026-03-12T14:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.606987 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.607020 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.607031 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.607046 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.607057 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:15Z","lastTransitionTime":"2026-03-12T14:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.619206 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.619306 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:15 crc kubenswrapper[4832]: E0312 14:49:15.619424 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:15 crc kubenswrapper[4832]: E0312 14:49:15.619580 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.710288 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.710315 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.710324 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.710336 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.710345 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:15Z","lastTransitionTime":"2026-03-12T14:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.813941 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.813997 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.814015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.814038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.814054 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:15Z","lastTransitionTime":"2026-03-12T14:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.917152 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.917211 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.917228 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.917252 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:15 crc kubenswrapper[4832]: I0312 14:49:15.917273 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:15Z","lastTransitionTime":"2026-03-12T14:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.020122 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.020337 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.020400 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.020457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.020548 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:16Z","lastTransitionTime":"2026-03-12T14:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.209060 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.209104 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.209120 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.209140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.209155 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:16Z","lastTransitionTime":"2026-03-12T14:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.311941 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.312031 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.312059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.312089 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.312111 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:16Z","lastTransitionTime":"2026-03-12T14:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.414621 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.414667 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.414680 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.414696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.414708 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:16Z","lastTransitionTime":"2026-03-12T14:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.517469 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.517517 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.517529 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.517543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.517556 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:16Z","lastTransitionTime":"2026-03-12T14:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.619122 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.619218 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:16 crc kubenswrapper[4832]: E0312 14:49:16.619312 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:16 crc kubenswrapper[4832]: E0312 14:49:16.619410 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.620225 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.620441 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.620664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.620840 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.621020 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:16Z","lastTransitionTime":"2026-03-12T14:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.723484 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.723745 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.723838 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.723932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.724016 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:16Z","lastTransitionTime":"2026-03-12T14:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.826697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.826726 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.826736 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.826750 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.826761 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:16Z","lastTransitionTime":"2026-03-12T14:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.929156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.929284 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.929308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.929338 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:16 crc kubenswrapper[4832]: I0312 14:49:16.929358 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:16Z","lastTransitionTime":"2026-03-12T14:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.032174 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.032466 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.032665 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.032756 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.032852 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:17Z","lastTransitionTime":"2026-03-12T14:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.135861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.136165 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.136251 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.136340 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.136424 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:17Z","lastTransitionTime":"2026-03-12T14:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.239133 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.239177 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.239192 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.239221 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.239237 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:17Z","lastTransitionTime":"2026-03-12T14:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.342148 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.342187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.342198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.342214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.342225 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:17Z","lastTransitionTime":"2026-03-12T14:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.445356 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.445396 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.445410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.445426 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.445437 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:17Z","lastTransitionTime":"2026-03-12T14:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.548954 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.549003 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.549020 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.549045 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.549062 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:17Z","lastTransitionTime":"2026-03-12T14:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.619338 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.619416 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:17 crc kubenswrapper[4832]: E0312 14:49:17.619986 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:17 crc kubenswrapper[4832]: E0312 14:49:17.620168 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.651264 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.651308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.651326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.651344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.651357 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:17Z","lastTransitionTime":"2026-03-12T14:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.754394 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.754427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.754437 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.754455 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.754467 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:17Z","lastTransitionTime":"2026-03-12T14:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.857479 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.857963 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.858143 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.858394 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.858634 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:17Z","lastTransitionTime":"2026-03-12T14:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.924081 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs\") pod \"network-metrics-daemon-lmjrb\" (UID: \"c3abc18e-3b7e-4afe-b35b-3b619290e875\") " pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:17 crc kubenswrapper[4832]: E0312 14:49:17.924314 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:49:17 crc kubenswrapper[4832]: E0312 14:49:17.924412 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs podName:c3abc18e-3b7e-4afe-b35b-3b619290e875 nodeName:}" failed. No retries permitted until 2026-03-12 14:49:25.924389317 +0000 UTC m=+124.568403543 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs") pod "network-metrics-daemon-lmjrb" (UID: "c3abc18e-3b7e-4afe-b35b-3b619290e875") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.960995 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.961280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.961398 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.961575 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:17 crc kubenswrapper[4832]: I0312 14:49:17.961697 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:17Z","lastTransitionTime":"2026-03-12T14:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.065966 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.066362 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.066558 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.066801 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.066829 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:18Z","lastTransitionTime":"2026-03-12T14:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.169957 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.170012 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.170029 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.170053 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.170083 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:18Z","lastTransitionTime":"2026-03-12T14:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.273666 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.273733 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.273756 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.273784 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.273806 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:18Z","lastTransitionTime":"2026-03-12T14:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.376660 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.376739 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.376761 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.376789 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.376808 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:18Z","lastTransitionTime":"2026-03-12T14:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.479281 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.479334 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.479344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.479360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.479372 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:18Z","lastTransitionTime":"2026-03-12T14:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.582835 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.582897 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.582915 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.582940 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.582955 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:18Z","lastTransitionTime":"2026-03-12T14:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.619606 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.619666 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:18 crc kubenswrapper[4832]: E0312 14:49:18.619831 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:18 crc kubenswrapper[4832]: E0312 14:49:18.619997 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.686605 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.686678 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.686702 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.686734 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.686757 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:18Z","lastTransitionTime":"2026-03-12T14:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.790161 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.790205 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.790217 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.790232 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.790245 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:18Z","lastTransitionTime":"2026-03-12T14:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.893589 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.893676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.893689 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.893709 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.893720 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:18Z","lastTransitionTime":"2026-03-12T14:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.996172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.996217 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.996229 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.996244 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:18 crc kubenswrapper[4832]: I0312 14:49:18.996256 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:18Z","lastTransitionTime":"2026-03-12T14:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.098650 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.098686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.098695 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.098709 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.098718 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:19Z","lastTransitionTime":"2026-03-12T14:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.202345 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.202389 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.202399 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.202415 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.202426 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:19Z","lastTransitionTime":"2026-03-12T14:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.305437 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.305524 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.305537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.305555 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.305567 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:19Z","lastTransitionTime":"2026-03-12T14:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.408117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.408171 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.408183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.408202 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.408215 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:19Z","lastTransitionTime":"2026-03-12T14:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.511147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.511214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.511233 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.511256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.511275 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:19Z","lastTransitionTime":"2026-03-12T14:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.613910 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.613985 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.614010 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.614036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.614054 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:19Z","lastTransitionTime":"2026-03-12T14:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.619399 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.619455 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:19 crc kubenswrapper[4832]: E0312 14:49:19.619550 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:19 crc kubenswrapper[4832]: E0312 14:49:19.619680 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.717823 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.717937 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.717964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.717991 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.718010 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:19Z","lastTransitionTime":"2026-03-12T14:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.821785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.821832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.821843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.821862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.821875 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:19Z","lastTransitionTime":"2026-03-12T14:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.925178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.925273 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.925302 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.925333 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:19 crc kubenswrapper[4832]: I0312 14:49:19.925350 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:19Z","lastTransitionTime":"2026-03-12T14:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.028536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.028596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.028608 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.028631 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.028644 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:20Z","lastTransitionTime":"2026-03-12T14:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.131869 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.132093 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.132117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.132144 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.132162 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:20Z","lastTransitionTime":"2026-03-12T14:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.235561 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.235660 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.235687 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.235716 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.235733 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:20Z","lastTransitionTime":"2026-03-12T14:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.338956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.338997 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.339029 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.339047 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.339056 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:20Z","lastTransitionTime":"2026-03-12T14:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.441965 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.442017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.442035 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.442056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.442073 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:20Z","lastTransitionTime":"2026-03-12T14:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.545314 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.545357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.545371 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.545393 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.545410 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:20Z","lastTransitionTime":"2026-03-12T14:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.619351 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.619339 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:20 crc kubenswrapper[4832]: E0312 14:49:20.619606 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:20 crc kubenswrapper[4832]: E0312 14:49:20.619781 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.648191 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.648256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.648272 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.648293 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.648307 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:20Z","lastTransitionTime":"2026-03-12T14:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.751569 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.751645 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.751664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.751692 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.751711 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:20Z","lastTransitionTime":"2026-03-12T14:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.854283 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.854342 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.854354 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.854377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.854391 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:20Z","lastTransitionTime":"2026-03-12T14:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.957688 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.958050 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.958151 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.958243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:20 crc kubenswrapper[4832]: I0312 14:49:20.958332 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:20Z","lastTransitionTime":"2026-03-12T14:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.061058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.061280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.061389 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.061473 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.061584 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:21Z","lastTransitionTime":"2026-03-12T14:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.164673 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.165110 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.165255 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.165397 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.165671 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:21Z","lastTransitionTime":"2026-03-12T14:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.268209 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.268242 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.268255 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.268271 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.268285 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:21Z","lastTransitionTime":"2026-03-12T14:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.370599 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.370821 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.370899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.370996 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.371095 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:21Z","lastTransitionTime":"2026-03-12T14:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.474280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.474310 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.474321 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.474357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.474371 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:21Z","lastTransitionTime":"2026-03-12T14:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.578107 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.578581 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.578808 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.578989 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.579159 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:21Z","lastTransitionTime":"2026-03-12T14:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.619283 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:21 crc kubenswrapper[4832]: E0312 14:49:21.619442 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.620150 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:21 crc kubenswrapper[4832]: E0312 14:49:21.620371 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.620766 4832 scope.go:117] "RemoveContainer" containerID="6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.683086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.683456 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.683474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.683526 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.683548 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:21Z","lastTransitionTime":"2026-03-12T14:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.786910 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.786958 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.786973 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.786994 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.787010 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:21Z","lastTransitionTime":"2026-03-12T14:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.889172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.889410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.889426 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.889445 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.889591 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:21Z","lastTransitionTime":"2026-03-12T14:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.991435 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.991464 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.991473 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.991485 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:21 crc kubenswrapper[4832]: I0312 14:49:21.991494 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:21Z","lastTransitionTime":"2026-03-12T14:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.093821 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.093856 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.093865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.093877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.093886 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:22Z","lastTransitionTime":"2026-03-12T14:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.196187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.196216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.196225 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.196243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.196251 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:22Z","lastTransitionTime":"2026-03-12T14:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.231160 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5zjpx_18cc235e-1890-485d-8ca2-bf03b2006ab9/ovnkube-controller/1.log" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.235203 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerStarted","Data":"a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52"} Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.235749 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.253215 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5
d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.263711 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6
caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.279285 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:07Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0312 14:49:06.927851 6776 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.927964 6776 reflector.go:311] Stopping reflector 
*v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928120 6776 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928393 6776 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0312 14:49:06.928447 6776 factory.go:656] Stopping watch factory\\\\nI0312 14:49:06.928459 6776 handler.go:208] Removed *v1.Node event handler 2\\\\nI0312 14:49:06.934427 6776 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0312 14:49:06.934453 6776 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0312 14:49:06.934488 6776 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:06.934532 6776 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:06.934600 6776 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.289124 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.298318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.298570 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.298660 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.298766 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.298864 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:22Z","lastTransitionTime":"2026-03-12T14:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.306838 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6ee5207-3aaa-4052-8b52-5147b990fb5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73ce28d9a27fe64b6fd0a13696d453aa22ae968db063da8d9c1cefff1b6ed7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc3ac75fa8e4697061b914425ae2471c6bfbc699ede95831627d8a80df3418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7bc318ab91bd7950b647065c452bd0e38608a4cec73140dc6ffdae5e70bad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8355918ef44f602e4aed6dae9b5c26ba632d93685c715c8d458c6817a2c41c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa7479f99bdfa90e500450c70c2b1d64b7bbd15e7bf649cc889bab78e70ee896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.319629 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.328810 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.339253 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3abc18e-3b7e-4afe-b35b-3b619290e875\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lmjrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc 
kubenswrapper[4832]: I0312 14:49:22.352441 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8058e6536cfd591c8a4e88d5007d0a72ed0b92926d01f5909915aaab440a686\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.367069 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.381926 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.395712 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.401581 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.401643 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.401667 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.401691 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.401707 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:22Z","lastTransitionTime":"2026-03-12T14:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.410974 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.426320 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f191cdcc-8d3e-4f37-8cda-a312cac33177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8ed05416542ba370b5239dcc550e0077b4a52050843ef7751dc812e8794e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38bc2d755830005662e8d0bf0b5f7e1a5fad
2b56513e580639124d49f6d7fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65g42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.442683 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.457804 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.481031 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.503914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.503952 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.503962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.503980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.503991 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:22Z","lastTransitionTime":"2026-03-12T14:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:22 crc kubenswrapper[4832]: E0312 14:49:22.604244 4832 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.619612 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.619635 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:22 crc kubenswrapper[4832]: E0312 14:49:22.619986 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:22 crc kubenswrapper[4832]: E0312 14:49:22.619986 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.633206 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.647940 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.661105 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.675930 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.693692 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.715487 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.728549 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f191cdcc-8d3e-4f37-8cda-a312cac33177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8ed05416542ba370b5239dcc550e0077b4a52050843ef7751dc812e8794e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38bc2d755830005662e8d0bf0b5f7e1a5fad2b56513e580639124d49f6d7fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:08Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65g42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.752217 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6ee5207-3aaa-4052-8b52-5147b990fb5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73ce28d9a27fe64b6fd0a13696d453aa22ae968db063da8d9c1cefff1b6ed7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc3ac75fa8e4697061b914425ae2471c6bfbc699ede95831627d8a80df3418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7bc318ab91bd7950b647065c452bd0e38608a4cec73140dc6ffdae5e70bad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8355918ef44f602e4aed6dae9b5c26ba632d93685c715c8d458c6817a2c41c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa7479f99bdfa90e500450c70c2b1d64b7bbd15e7bf649cc889bab78e70ee896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a4
3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: E0312 14:49:22.761188 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.769186 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.781321 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.794939 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.810792 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.832850 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:07Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0312 14:49:06.927851 6776 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.927964 6776 reflector.go:311] Stopping reflector 
*v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928120 6776 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928393 6776 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0312 14:49:06.928447 6776 factory.go:656] Stopping watch factory\\\\nI0312 14:49:06.928459 6776 handler.go:208] Removed *v1.Node event handler 2\\\\nI0312 14:49:06.934427 6776 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0312 14:49:06.934453 6776 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0312 14:49:06.934488 6776 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:06.934532 6776 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:06.934600 6776 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.847682 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.863281 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3abc18e-3b7e-4afe-b35b-3b619290e875\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lmjrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc 
kubenswrapper[4832]: I0312 14:49:22.883996 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8058e6536cfd591c8a4e88d5007d0a72ed0b92926d01f5909915aaab440a686\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:22 crc kubenswrapper[4832]: I0312 14:49:22.896767 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:22Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.242189 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5zjpx_18cc235e-1890-485d-8ca2-bf03b2006ab9/ovnkube-controller/2.log" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.243807 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5zjpx_18cc235e-1890-485d-8ca2-bf03b2006ab9/ovnkube-controller/1.log" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.248311 4832 generic.go:334] 
"Generic (PLEG): container finished" podID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerID="a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52" exitCode=1 Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.248380 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerDied","Data":"a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52"} Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.248438 4832 scope.go:117] "RemoveContainer" containerID="6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.249619 4832 scope.go:117] "RemoveContainer" containerID="a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52" Mar 12 14:49:23 crc kubenswrapper[4832]: E0312 14:49:23.249958 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.286753 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6ee5207-3aaa-4052-8b52-5147b990fb5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73ce28d9a27fe64b6fd0a13696d453aa22ae968db063da8d9c1cefff1b6ed7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc3ac75fa8e4697061b914425ae2471c6bfbc699ede95831627d8a80df3418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7bc318ab91bd7950b647065c452bd0e38608a4cec73140dc6ffdae5e70bad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8355918ef44f602e4aed6dae9b5c26ba632d93685c715c8d458c6817a2c41c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa7479f99bdfa90e500450c70c2b1d64b7bbd15e7bf649cc889bab78e70ee896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.302307 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.317066 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.335584 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.351071 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.384020 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:07Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0312 14:49:06.927851 6776 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.927964 6776 reflector.go:311] Stopping reflector 
*v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928120 6776 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928393 6776 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0312 14:49:06.928447 6776 factory.go:656] Stopping watch factory\\\\nI0312 14:49:06.928459 6776 handler.go:208] Removed *v1.Node event handler 2\\\\nI0312 14:49:06.934427 6776 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0312 14:49:06.934453 6776 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0312 14:49:06.934488 6776 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:06.934532 6776 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:06.934600 6776 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:22Z\\\",\\\"message\\\":\\\"\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:22.461258 7052 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0312 14:49:22.461216 7052 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:22.461348 7052 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0312 14:49:22.461394 7052 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:22.461430 7052 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:22.461523 7052 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath
\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 
crc kubenswrapper[4832]: I0312 14:49:23.396612 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.411961 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3abc18e-3b7e-4afe-b35b-3b619290e875\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lmjrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 
12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.432478 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8058e6536cfd591c8a4e88d5007d0a72ed0b92926d01f5909915aaab440a686\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.451443 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.468750 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.482795 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.497363 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.513907 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.529154 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.540930 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.544618 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/en
trypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"nam
e\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.557717 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f191cdcc-8d3e-4f37-8cda-a312cac33177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8ed05
416542ba370b5239dcc550e0077b4a52050843ef7751dc812e8794e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38bc2d755830005662e8d0bf0b5f7e1a5fad2b56513e580639124d49f6d7fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\
\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65g42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.571096 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.585848 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.597041 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.606887 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.619058 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.619168 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:23 crc kubenswrapper[4832]: E0312 14:49:23.619236 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:23 crc kubenswrapper[4832]: E0312 14:49:23.619298 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.622330 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.627151 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.639470 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6b
fcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.654211 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f191cdcc-8d3e-4f37-8cda-a312cac33177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8ed05416542ba370b5239dcc550e0077b4a52050843ef7751dc812e8794e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38bc2d755830005662e8d0bf0b5f7e1a5fad2b56513e580639124d49f6d7fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65g42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.667301 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-au
th-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.688015 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f9a1f4ad2ac8445074891e668f1d0cab1615e5a93a4246f8b97c9bc23bfa520\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:07Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0312 14:49:06.927851 6776 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.927964 6776 reflector.go:311] Stopping reflector 
*v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928120 6776 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 14:49:06.928393 6776 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0312 14:49:06.928447 6776 factory.go:656] Stopping watch factory\\\\nI0312 14:49:06.928459 6776 handler.go:208] Removed *v1.Node event handler 2\\\\nI0312 14:49:06.934427 6776 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0312 14:49:06.934453 6776 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0312 14:49:06.934488 6776 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:06.934532 6776 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:06.934600 6776 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:22Z\\\",\\\"message\\\":\\\"\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:22.461258 7052 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0312 14:49:22.461216 7052 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:22.461348 7052 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0312 14:49:22.461394 7052 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:22.461430 7052 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:22.461523 7052 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath
\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 
crc kubenswrapper[4832]: I0312 14:49:23.701237 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.729603 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6ee5207-3aaa-4052-8b52-5147b990fb5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73ce28d9a27fe64b6fd0a13696d453aa22ae968db063da8d9c1cefff1b6ed7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b900922
72e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc3ac75fa8e4697061b914425ae2471c6bfbc699ede95831627d8a80df3418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7bc318ab91bd7950b647065c452bd0e38608a4cec73140dc6ffdae5e70bad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8355918ef44f602e4aed6dae9b5c26ba632d93685c715c8d458c6817a2c41c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa7479f99bdfa90e500450c70c2b1d64b7bbd15e7bf649cc889bab78e70ee896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f63769813
6d4da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.747364 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.758123 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.771993 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.786727 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3abc18e-3b7e-4afe-b35b-3b619290e875\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lmjrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc 
kubenswrapper[4832]: I0312 14:49:23.804458 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14
b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8058e6536cfd591c8a4e88d5007d0a72ed0b92926d01f5909915aaab440a686\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 
14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.823142 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.840716 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.840760 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.840774 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:23 crc 
kubenswrapper[4832]: I0312 14:49:23.840791 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.840802 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:23Z","lastTransitionTime":"2026-03-12T14:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:23 crc kubenswrapper[4832]: E0312 14:49:23.854417 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff54
7002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cd
fc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"nam
es\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.858515 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.858723 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.858798 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.858888 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.858973 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:23Z","lastTransitionTime":"2026-03-12T14:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:23 crc kubenswrapper[4832]: E0312 14:49:23.870279 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.874120 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.874162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.874174 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.874190 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.874201 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:23Z","lastTransitionTime":"2026-03-12T14:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:23 crc kubenswrapper[4832]: E0312 14:49:23.891579 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.895947 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.896001 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.896011 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.896032 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.896045 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:23Z","lastTransitionTime":"2026-03-12T14:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:23 crc kubenswrapper[4832]: E0312 14:49:23.919902 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.924212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.924249 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.924257 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.924273 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.924284 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:23Z","lastTransitionTime":"2026-03-12T14:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:23 crc kubenswrapper[4832]: E0312 14:49:23.936031 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:23Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:23 crc kubenswrapper[4832]: E0312 14:49:23.936200 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 14:49:23 crc kubenswrapper[4832]: I0312 14:49:23.973606 4832 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.253058 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5zjpx_18cc235e-1890-485d-8ca2-bf03b2006ab9/ovnkube-controller/2.log" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.256530 4832 scope.go:117] "RemoveContainer" containerID="a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52" Mar 12 14:49:24 crc kubenswrapper[4832]: E0312 14:49:24.256655 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.274402 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:24Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.292721 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:24Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 
14:49:24.308126 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:24Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.319431 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:24Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.336838 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:24Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.358273 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:24Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.373084 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f191cdcc-8d3e-4f37-8cda-a312cac33177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8ed05416542ba370b5239dcc550e0077b4a52050843ef7751dc812e8794e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38bc2d755830005662e8d0bf0b5f7e1a5fad2b56513e580639124d49f6d7fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:08Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65g42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:24Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.384603 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:24Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.407634 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6ee5207-3aaa-4052-8b52-5147b990fb5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73ce28d9a27fe64b6fd0a13696d453aa22ae968db063da8d9c1cefff1b6ed7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc3ac75fa8e4697061b914425ae2471c6bfbc699ede95831627d8a80df3418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7bc318ab91bd7950b647065c452bd0e38608a4cec73140dc6ffdae5e70bad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8355918ef44f602e4aed6dae9b5c26ba632d93685c715c8d458c6817a2c41c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa7479f99bdfa90e500450c70c2b1d64b7bbd15e7bf649cc889bab78e70ee896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:24Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.424884 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4271564-f655-4aa3-b732-45cbda4401a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://440f6901fa28a9d67506031e20032120e27453bd4a3f0d7a6d0a61972050e8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://329f16cd40740ccf2e50f78cfe5eb2d7cf8c47e37447f97e40259aedda51ea86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6c103b990f3bedc31a6be395538872d3a80f0ad18990155f3c9ae7388e1138d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:24Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.437387 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:24Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.449582 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:24Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.469766 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:24Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.485089 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:24Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.514406 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:22Z\\\",\\\"message\\\":\\\"\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:22.461258 7052 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0312 14:49:22.461216 7052 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:22.461348 7052 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0312 14:49:22.461394 7052 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:22.461430 7052 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:22.461523 7052 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0ad
b7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:24Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.529857 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3abc18e-3b7e-4afe-b35b-3b619290e875\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lmjrb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:24Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.546006 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8058e6536cfd591c8a4e88d5007d0a72ed0b92926d01f5909915aaab440a686\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a
53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:24Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.564710 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:24Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.618922 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:24 crc kubenswrapper[4832]: E0312 14:49:24.619055 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:24 crc kubenswrapper[4832]: I0312 14:49:24.618929 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:24 crc kubenswrapper[4832]: E0312 14:49:24.619280 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:25 crc kubenswrapper[4832]: I0312 14:49:25.618897 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:25 crc kubenswrapper[4832]: I0312 14:49:25.619008 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:25 crc kubenswrapper[4832]: E0312 14:49:25.619034 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:25 crc kubenswrapper[4832]: E0312 14:49:25.619223 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:25 crc kubenswrapper[4832]: I0312 14:49:25.998843 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs\") pod \"network-metrics-daemon-lmjrb\" (UID: \"c3abc18e-3b7e-4afe-b35b-3b619290e875\") " pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:25 crc kubenswrapper[4832]: E0312 14:49:25.998967 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:49:25 crc kubenswrapper[4832]: E0312 14:49:25.999026 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs podName:c3abc18e-3b7e-4afe-b35b-3b619290e875 nodeName:}" failed. No retries permitted until 2026-03-12 14:49:41.999011175 +0000 UTC m=+140.643025391 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs") pod "network-metrics-daemon-lmjrb" (UID: "c3abc18e-3b7e-4afe-b35b-3b619290e875") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:49:26 crc kubenswrapper[4832]: I0312 14:49:26.618995 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:26 crc kubenswrapper[4832]: I0312 14:49:26.619100 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:26 crc kubenswrapper[4832]: E0312 14:49:26.619582 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:26 crc kubenswrapper[4832]: E0312 14:49:26.619760 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:27 crc kubenswrapper[4832]: I0312 14:49:27.618990 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:27 crc kubenswrapper[4832]: E0312 14:49:27.619191 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:27 crc kubenswrapper[4832]: I0312 14:49:27.619275 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:27 crc kubenswrapper[4832]: E0312 14:49:27.619467 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:27 crc kubenswrapper[4832]: E0312 14:49:27.763057 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 14:49:28 crc kubenswrapper[4832]: I0312 14:49:28.619254 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:28 crc kubenswrapper[4832]: E0312 14:49:28.619490 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:28 crc kubenswrapper[4832]: I0312 14:49:28.619254 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:28 crc kubenswrapper[4832]: E0312 14:49:28.619716 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:29 crc kubenswrapper[4832]: I0312 14:49:29.619075 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:29 crc kubenswrapper[4832]: I0312 14:49:29.619122 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:29 crc kubenswrapper[4832]: E0312 14:49:29.619263 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:29 crc kubenswrapper[4832]: E0312 14:49:29.619373 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:30 crc kubenswrapper[4832]: I0312 14:49:30.619683 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:30 crc kubenswrapper[4832]: I0312 14:49:30.619747 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:30 crc kubenswrapper[4832]: E0312 14:49:30.619823 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:30 crc kubenswrapper[4832]: E0312 14:49:30.619904 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:31 crc kubenswrapper[4832]: I0312 14:49:31.618743 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:31 crc kubenswrapper[4832]: E0312 14:49:31.618938 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:31 crc kubenswrapper[4832]: I0312 14:49:31.619122 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:31 crc kubenswrapper[4832]: E0312 14:49:31.619247 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.618887 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.618971 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:32 crc kubenswrapper[4832]: E0312 14:49:32.619063 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:32 crc kubenswrapper[4832]: E0312 14:49:32.619179 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.640429 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:32Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.661858 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:32Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.679689 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:32Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.696490 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:32Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.714820 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:32Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.738684 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:32Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.758664 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f191cdcc-8d3e-4f37-8cda-a312cac33177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8ed05416542ba370b5239dcc550e0077b4a52050843ef7751dc812e8794e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38bc2d755830005662e8d0bf0b5f7e1a5fad2b56513e580639124d49f6d7fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:08Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65g42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:32Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:32 crc kubenswrapper[4832]: E0312 14:49:32.763751 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.780783 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:32Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.807961 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6ee5207-3aaa-4052-8b52-5147b990fb5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73ce28d9a27fe64b6fd0a13696d453aa22ae968db063da8d9c1cefff1b6ed7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc3ac75fa8e4697061b914425ae2471c6bfbc699ede95831627d8a80df3418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7bc318ab91bd7950b647065c452bd0e38608a4cec73140dc6ffdae5e70bad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8355918ef44f602e4aed6dae9b5c26ba632d93685c715c8d458c6817a2c41c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa7479f99bdfa90e500450c70c2b1d64b7bbd15e7bf649cc889bab78e70ee896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:32Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.826211 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4271564-f655-4aa3-b732-45cbda4401a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://440f6901fa28a9d67506031e20032120e27453bd4a3f0d7a6d0a61972050e8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://329f16cd40740ccf2e50f78cfe5eb2d7cf8c47e37447f97e40259aedda51ea86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6c103b990f3bedc31a6be395538872d3a80f0ad18990155f3c9ae7388e1138d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:32Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.843467 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:32Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.854587 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:32Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.873878 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:32Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.886359 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:32Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.908449 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:22Z\\\",\\\"message\\\":\\\"\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:22.461258 7052 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0312 14:49:22.461216 7052 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:22.461348 7052 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0312 14:49:22.461394 7052 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:22.461430 7052 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:22.461523 7052 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0ad
b7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:32Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.922027 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3abc18e-3b7e-4afe-b35b-3b619290e875\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lmjrb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:32Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.940374 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8058e6536cfd591c8a4e88d5007d0a72ed0b92926d01f5909915aaab440a686\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a
53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:32Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:32 crc kubenswrapper[4832]: I0312 14:49:32.951864 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:32Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:33 crc kubenswrapper[4832]: I0312 14:49:33.618959 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:33 crc kubenswrapper[4832]: I0312 14:49:33.618959 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:33 crc kubenswrapper[4832]: E0312 14:49:33.619164 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:33 crc kubenswrapper[4832]: E0312 14:49:33.619248 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.130766 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.130864 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.130896 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.130927 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.130950 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:34Z","lastTransitionTime":"2026-03-12T14:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:34 crc kubenswrapper[4832]: E0312 14:49:34.154356 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:34Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.161090 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.161162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.161181 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.161206 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.161224 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:34Z","lastTransitionTime":"2026-03-12T14:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:34 crc kubenswrapper[4832]: E0312 14:49:34.180468 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:34Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.185836 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.185896 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.185921 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.185950 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.185971 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:34Z","lastTransitionTime":"2026-03-12T14:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:34 crc kubenswrapper[4832]: E0312 14:49:34.205413 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:34Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.210214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.210275 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.210294 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.210317 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.210334 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:34Z","lastTransitionTime":"2026-03-12T14:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:34 crc kubenswrapper[4832]: E0312 14:49:34.231318 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:34Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.235654 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.235734 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.235751 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.235772 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.235787 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:34Z","lastTransitionTime":"2026-03-12T14:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:34 crc kubenswrapper[4832]: E0312 14:49:34.252331 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:34Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:34 crc kubenswrapper[4832]: E0312 14:49:34.252481 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.618996 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:34 crc kubenswrapper[4832]: E0312 14:49:34.619141 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.618993 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:34 crc kubenswrapper[4832]: E0312 14:49:34.619543 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:34 crc kubenswrapper[4832]: I0312 14:49:34.629989 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 12 14:49:35 crc kubenswrapper[4832]: I0312 14:49:35.619402 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:35 crc kubenswrapper[4832]: I0312 14:49:35.619409 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:35 crc kubenswrapper[4832]: E0312 14:49:35.619557 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:35 crc kubenswrapper[4832]: E0312 14:49:35.619611 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:36 crc kubenswrapper[4832]: I0312 14:49:36.619082 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:36 crc kubenswrapper[4832]: I0312 14:49:36.619082 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:36 crc kubenswrapper[4832]: E0312 14:49:36.619428 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:36 crc kubenswrapper[4832]: E0312 14:49:36.619284 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:37 crc kubenswrapper[4832]: I0312 14:49:37.619102 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:37 crc kubenswrapper[4832]: I0312 14:49:37.619175 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:37 crc kubenswrapper[4832]: E0312 14:49:37.619265 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:37 crc kubenswrapper[4832]: E0312 14:49:37.619313 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:37 crc kubenswrapper[4832]: E0312 14:49:37.764788 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 14:49:38 crc kubenswrapper[4832]: I0312 14:49:38.619753 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:38 crc kubenswrapper[4832]: I0312 14:49:38.619936 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:38 crc kubenswrapper[4832]: E0312 14:49:38.620199 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:38 crc kubenswrapper[4832]: E0312 14:49:38.620718 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:38 crc kubenswrapper[4832]: I0312 14:49:38.620963 4832 scope.go:117] "RemoveContainer" containerID="a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52" Mar 12 14:49:38 crc kubenswrapper[4832]: E0312 14:49:38.621229 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" Mar 12 14:49:39 crc kubenswrapper[4832]: I0312 14:49:39.619338 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:39 crc kubenswrapper[4832]: I0312 14:49:39.619350 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:39 crc kubenswrapper[4832]: E0312 14:49:39.619592 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:39 crc kubenswrapper[4832]: E0312 14:49:39.619725 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:40 crc kubenswrapper[4832]: I0312 14:49:40.618977 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:40 crc kubenswrapper[4832]: E0312 14:49:40.619215 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:40 crc kubenswrapper[4832]: I0312 14:49:40.619632 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:40 crc kubenswrapper[4832]: E0312 14:49:40.619993 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:41 crc kubenswrapper[4832]: I0312 14:49:41.619731 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:41 crc kubenswrapper[4832]: I0312 14:49:41.619803 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:41 crc kubenswrapper[4832]: E0312 14:49:41.619924 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:41 crc kubenswrapper[4832]: E0312 14:49:41.620588 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.089431 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs\") pod \"network-metrics-daemon-lmjrb\" (UID: \"c3abc18e-3b7e-4afe-b35b-3b619290e875\") " pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:42 crc kubenswrapper[4832]: E0312 14:49:42.089639 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:49:42 crc kubenswrapper[4832]: E0312 14:49:42.089744 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs podName:c3abc18e-3b7e-4afe-b35b-3b619290e875 nodeName:}" failed. No retries permitted until 2026-03-12 14:50:14.089719509 +0000 UTC m=+172.733733775 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs") pod "network-metrics-daemon-lmjrb" (UID: "c3abc18e-3b7e-4afe-b35b-3b619290e875") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.492995 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:49:42 crc kubenswrapper[4832]: E0312 14:49:42.493185 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:50:46.49315667 +0000 UTC m=+205.137170936 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.594016 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.594354 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:42 crc kubenswrapper[4832]: E0312 14:49:42.594403 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:49:42 crc kubenswrapper[4832]: E0312 14:49:42.594438 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:49:42 crc kubenswrapper[4832]: E0312 14:49:42.594458 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:49:42 crc kubenswrapper[4832]: E0312 14:49:42.594460 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 14:49:42 crc kubenswrapper[4832]: E0312 14:49:42.594558 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 14:50:46.594534121 +0000 UTC m=+205.238548377 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:49:42 crc kubenswrapper[4832]: E0312 14:49:42.594591 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 14:50:46.594571742 +0000 UTC m=+205.238586008 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.594639 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.594696 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:42 crc kubenswrapper[4832]: E0312 14:49:42.594809 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 14:49:42 crc kubenswrapper[4832]: E0312 14:49:42.594856 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 14:50:46.594840149 +0000 UTC m=+205.238854415 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 14:49:42 crc kubenswrapper[4832]: E0312 14:49:42.594948 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:49:42 crc kubenswrapper[4832]: E0312 14:49:42.595001 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:49:42 crc kubenswrapper[4832]: E0312 14:49:42.595023 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:49:42 crc kubenswrapper[4832]: E0312 14:49:42.595121 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 14:50:46.595087966 +0000 UTC m=+205.239102232 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.619187 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.619288 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:42 crc kubenswrapper[4832]: E0312 14:49:42.619352 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:42 crc kubenswrapper[4832]: E0312 14:49:42.619543 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.639184 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f191cdcc-8d3e-4f37-8cda-a312cac33177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8ed05416542ba370b5239dcc550e0077b4a52050843ef7751dc812e8794e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control
-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38bc2d755830005662e8d0bf0b5f7e1a5fad2b56513e580639124d49f6d7fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65g42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:42Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.655907 4832 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:42Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.678223 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1a5946-d90a-46ec-a25c-86027295b8e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff6af37a71ff334d7ab7ad24a97a77223ea7952329eb08d1e7e8345c907fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849c392dc7281e1c55b222a27474e17b82931f55c373a52b6069ce535fdf7b74\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:47:54Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0312 14:47:29.072639 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0312 14:47:29.075979 1 observer_polling.go:159] Starting file observer\\\\nI0312 14:47:29.147657 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0312 14:47:29.160881 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0312 14:47:54.299844 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0312 14:47:54.299949 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:53Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9276e887ebf6a570e0c7707f87257a4d155c33e59d354ab45ab02c9e1d03598d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e71ce6d8cc9f654140e7b6b73d67bd7a81b80a2518b5f32feb7cc1a2a95450\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41f9611a9244b788b1601aaf55ce32bb185b05f4fb13897b9a94e8755f00a11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:42Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.697001 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:42Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.725281 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:42Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.747051 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:42Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.764666 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:42Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:42 crc kubenswrapper[4832]: E0312 14:49:42.765430 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.798172 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:22Z\\\",\\\"message\\\":\\\"\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:22.461258 7052 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0312 14:49:22.461216 7052 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:22.461348 7052 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0312 14:49:22.461394 7052 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:22.461430 7052 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:22.461523 7052 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0ad
b7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:42Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.809893 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:42Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.842926 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6ee5207-3aaa-4052-8b52-5147b990fb5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73ce28d9a27fe64b6fd0a13696d453aa22ae968db063da8d9c1cefff1b6ed7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc3ac75fa8e4697061b914425ae2471c6bfbc699ede95831627d8a80df3418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7bc318ab91bd7950b647065c452bd0e38608a4cec73140dc6ffdae5e70bad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14
:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8355918ef44f602e4aed6dae9b5c26ba632d93685c715c8d458c6817a2c41c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa7479f99bdfa90e500450c70c2b1d64b7bbd15e7bf649cc889bab78e70ee896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:42Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.858341 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4271564-f655-4aa3-b732-45cbda4401a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://440f6901fa28a9d67506031e20032120e27453bd4a3f0d7a6d0a61972050e8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://329f16cd40740ccf2e50f78cfe5eb2d7cf8c47e37447f97e40259aedda51ea86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6c103b990f3bedc31a6be395538872d3a80f0ad18990155f3c9ae7388e1138d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:42Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.877415 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:42Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.889962 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:42Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.906312 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3abc18e-3b7e-4afe-b35b-3b619290e875\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lmjrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:42Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:42 crc 
kubenswrapper[4832]: I0312 14:49:42.923685 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14
b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8058e6536cfd591c8a4e88d5007d0a72ed0b92926d01f5909915aaab440a686\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 
14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:42Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.940381 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:42Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.958370 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:42Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.975302 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:42Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:42 crc kubenswrapper[4832]: I0312 14:49:42.989292 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:42Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.322944 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c2phv_7c82e050-0168-4210-bb2d-7d8bbbc5e74e/kube-multus/0.log" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.323030 4832 generic.go:334] "Generic (PLEG): container finished" podID="7c82e050-0168-4210-bb2d-7d8bbbc5e74e" containerID="b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540" exitCode=1 Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.323069 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c2phv" event={"ID":"7c82e050-0168-4210-bb2d-7d8bbbc5e74e","Type":"ContainerDied","Data":"b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540"} Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.323646 4832 scope.go:117] "RemoveContainer" 
containerID="b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.346328 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f191cdcc-8d3e-4f37-8cda-a312cac33177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8ed05416542ba370b5239dcc550e0077b4a52050843ef7751dc812e8794e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38bc2d755830005662e8d0bf0b5f7e1a5fad2b56513e580639124d49f6d7fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65g42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:43Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.363869 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:43Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.385482 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1a5946-d90a-46ec-a25c-86027295b8e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff6af37a71ff334d7ab7ad24a97a77223ea7952329eb08d1e7e8345c907fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849c392dc7281e1c55b222a27474e17b82931f55c373a52b6069ce535fdf7b74\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:47:54Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0312 14:47:29.072639 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0312 14:47:29.075979 1 observer_polling.go:159] Starting file observer\\\\nI0312 14:47:29.147657 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0312 14:47:29.160881 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0312 14:47:54.299844 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0312 14:47:54.299949 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:53Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9276e887ebf6a570e0c7707f87257a4d155c33e59d354ab45ab02c9e1d03598d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e71ce6d8cc9f654140e7b6b73d67bd7a81b80a2518b5f32feb7cc1a2a95450\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41f9611a9244b788b1601aaf55ce32bb185b05f4fb13897b9a94e8755f00a11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:43Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.405354 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:43Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.428335 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:43Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.449144 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:42Z\\\",\\\"message\\\":\\\"2026-03-12T14:48:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_098ab2ae-5172-434b-bb14-929e1a592c5a\\\\n2026-03-12T14:48:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_098ab2ae-5172-434b-bb14-929e1a592c5a to /host/opt/cni/bin/\\\\n2026-03-12T14:48:57Z [verbose] multus-daemon started\\\\n2026-03-12T14:48:57Z [verbose] Readiness Indicator file check\\\\n2026-03-12T14:49:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:43Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.468723 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-12T14:49:43Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.489456 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:22Z\\\",\\\"message\\\":\\\"\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:22.461258 7052 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0312 14:49:22.461216 7052 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:22.461348 7052 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0312 14:49:22.461394 7052 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:22.461430 7052 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:22.461523 7052 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0ad
b7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:43Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.499648 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:43Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.524620 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6ee5207-3aaa-4052-8b52-5147b990fb5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73ce28d9a27fe64b6fd0a13696d453aa22ae968db063da8d9c1cefff1b6ed7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc3ac75fa8e4697061b914425ae2471c6bfbc699ede95831627d8a80df3418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7bc318ab91bd7950b647065c452bd0e38608a4cec73140dc6ffdae5e70bad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14
:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8355918ef44f602e4aed6dae9b5c26ba632d93685c715c8d458c6817a2c41c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa7479f99bdfa90e500450c70c2b1d64b7bbd15e7bf649cc889bab78e70ee896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:43Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.537732 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4271564-f655-4aa3-b732-45cbda4401a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://440f6901fa28a9d67506031e20032120e27453bd4a3f0d7a6d0a61972050e8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://329f16cd40740ccf2e50f78cfe5eb2d7cf8c47e37447f97e40259aedda51ea86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6c103b990f3bedc31a6be395538872d3a80f0ad18990155f3c9ae7388e1138d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:43Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.550782 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:43Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.561825 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:43Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.573344 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3abc18e-3b7e-4afe-b35b-3b619290e875\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lmjrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:43Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:43 crc 
kubenswrapper[4832]: I0312 14:49:43.591886 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14
b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8058e6536cfd591c8a4e88d5007d0a72ed0b92926d01f5909915aaab440a686\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 
14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:43Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.609400 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:43Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.619637 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.619669 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:43 crc kubenswrapper[4832]: E0312 14:49:43.619767 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:43 crc kubenswrapper[4832]: E0312 14:49:43.620138 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.631457 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:43Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.651928 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:43Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:43 crc kubenswrapper[4832]: I0312 14:49:43.673339 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:43Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.314011 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.314079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.314099 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.314125 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.314144 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:44Z","lastTransitionTime":"2026-03-12T14:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.331941 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c2phv_7c82e050-0168-4210-bb2d-7d8bbbc5e74e/kube-multus/0.log" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.332027 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c2phv" event={"ID":"7c82e050-0168-4210-bb2d-7d8bbbc5e74e","Type":"ContainerStarted","Data":"3ec915d4a1c18059ebeeea82f6fea8505e9b88e684f7cded7c4ac243189d7ec0"} Mar 12 14:49:44 crc kubenswrapper[4832]: E0312 14:49:44.338002 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.342022 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.342069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.342081 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.342096 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.342106 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:44Z","lastTransitionTime":"2026-03-12T14:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.354740 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4271564-f655-4aa3-b732-45cbda4401a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://440f6901fa28a9d67506031e20032120e27453bd4a3f0d7a6d0a61972050e8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://329f16cd40740ccf2e50f78cfe5eb2d7cf8c47e37447f97e40259aedda51ea86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6c103b990f3bedc31a6be395538872d3a80f0ad18990155f3c9ae7388e1138d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: E0312 14:49:44.356840 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.360959 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.360983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.360992 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.361003 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.361013 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:44Z","lastTransitionTime":"2026-03-12T14:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.374329 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: E0312 14:49:44.374770 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.378719 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.378794 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.378812 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.378836 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.378852 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:44Z","lastTransitionTime":"2026-03-12T14:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.386125 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: E0312 14:49:44.391484 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.395218 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.395299 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.395321 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.395347 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.395367 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:44Z","lastTransitionTime":"2026-03-12T14:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.404460 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec915d4a1c18059ebeeea82f6fea8505e9b88e684f7cded7c4ac243189d7ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:42Z\\\",\\\"message\\\":\\\"2026-03-12T14:48:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_098ab2ae-5172-434b-bb14-929e1a592c5a\\\\n2026-03-12T14:48:57+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_098ab2ae-5172-434b-bb14-929e1a592c5a to /host/opt/cni/bin/\\\\n2026-03-12T14:48:57Z [verbose] multus-daemon started\\\\n2026-03-12T14:48:57Z [verbose] Readiness Indicator file check\\\\n2026-03-12T14:49:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: E0312 14:49:44.410429 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: E0312 14:49:44.410601 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.418376 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.439405 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:22Z\\\",\\\"message\\\":\\\"\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:22.461258 7052 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0312 14:49:22.461216 7052 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:22.461348 7052 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0312 14:49:22.461394 7052 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:22.461430 7052 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:22.461523 7052 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0ad
b7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.449573 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.466401 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6ee5207-3aaa-4052-8b52-5147b990fb5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73ce28d9a27fe64b6fd0a13696d453aa22ae968db063da8d9c1cefff1b6ed7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc3ac75fa8e4697061b914425ae2471c6bfbc699ede95831627d8a80df3418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7bc318ab91bd7950b647065c452bd0e38608a4cec73140dc6ffdae5e70bad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14
:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8355918ef44f602e4aed6dae9b5c26ba632d93685c715c8d458c6817a2c41c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa7479f99bdfa90e500450c70c2b1d64b7bbd15e7bf649cc889bab78e70ee896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.479793 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3abc18e-3b7e-4afe-b35b-3b619290e875\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lmjrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc 
kubenswrapper[4832]: I0312 14:49:44.493979 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.511775 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8058e6536cfd591c8a4e88d5007d0a72ed0b92926d01f5909915aaab440a686\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233
b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.528271 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.546643 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.565194 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.586165 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1a5946-d90a-46ec-a25c-86027295b8e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff6af37a71ff334d7ab7ad24a97a77223ea7952329eb08d1e7e8345c907fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849c392dc7281e1c55b222a27474e17b82931f55c373a52b6069ce535fdf7b74\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:47:54Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' 
sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0312 14:47:29.072639 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0312 14:47:29.075979 1 observer_polling.go:159] Starting file observer\\\\nI0312 14:47:29.147657 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0312 14:47:29.160881 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0312 14:47:54.299844 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0312 14:47:54.299949 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:53Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9276e887ebf6a570e0c7707f87257a4d155c33e59d354ab45ab02c9e1d03598d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e71ce6d8cc9f654140e7b6b73d67bd7a81b80a2518b5f32feb7cc1a2a95450\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41f9611a9244b788b1601aaf55ce32bb185b05f4fb13897b9a94e8755f00a11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.602773 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.619695 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.619831 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:44 crc kubenswrapper[4832]: E0312 14:49:44.619920 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:44 crc kubenswrapper[4832]: E0312 14:49:44.620043 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.625185 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os
-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.641247 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f191cdcc-8d3e-4f37-8cda-a312cac33177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8ed05416542ba370b5239dcc550e0077b4a52050843ef7751dc812e8794e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bd
eaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38bc2d755830005662e8d0bf0b5f7e1a5fad2b56513e580639124d49f6d7fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:08Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65g42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:44 crc kubenswrapper[4832]: I0312 14:49:44.657477 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:44Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:45 crc kubenswrapper[4832]: I0312 14:49:45.618979 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:45 crc kubenswrapper[4832]: I0312 14:49:45.619053 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:45 crc kubenswrapper[4832]: E0312 14:49:45.619104 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:45 crc kubenswrapper[4832]: E0312 14:49:45.619683 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:46 crc kubenswrapper[4832]: I0312 14:49:46.619187 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:46 crc kubenswrapper[4832]: I0312 14:49:46.619249 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:46 crc kubenswrapper[4832]: E0312 14:49:46.619417 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:46 crc kubenswrapper[4832]: E0312 14:49:46.619579 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:47 crc kubenswrapper[4832]: I0312 14:49:47.618954 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:47 crc kubenswrapper[4832]: I0312 14:49:47.619020 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:47 crc kubenswrapper[4832]: E0312 14:49:47.619123 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:47 crc kubenswrapper[4832]: E0312 14:49:47.619254 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:47 crc kubenswrapper[4832]: E0312 14:49:47.767602 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 14:49:48 crc kubenswrapper[4832]: I0312 14:49:48.619290 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:48 crc kubenswrapper[4832]: I0312 14:49:48.619353 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:48 crc kubenswrapper[4832]: E0312 14:49:48.619474 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:48 crc kubenswrapper[4832]: E0312 14:49:48.619582 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:49 crc kubenswrapper[4832]: I0312 14:49:49.618783 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:49 crc kubenswrapper[4832]: I0312 14:49:49.618866 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:49 crc kubenswrapper[4832]: E0312 14:49:49.618971 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:49 crc kubenswrapper[4832]: E0312 14:49:49.619386 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:49 crc kubenswrapper[4832]: I0312 14:49:49.619729 4832 scope.go:117] "RemoveContainer" containerID="a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.359028 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5zjpx_18cc235e-1890-485d-8ca2-bf03b2006ab9/ovnkube-controller/2.log" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.361687 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerStarted","Data":"1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f"} Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.362101 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.376837 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:50Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.389799 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:50Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.401814 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:50Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.416582 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:50Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.429980 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:50Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.440931 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f191cdcc-8d3e-4f37-8cda-a312cac33177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8ed05416542ba370b5239dcc550e0077b4a52050843ef7751dc812e8794e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38bc2d755830005662e8d0bf0b5f7e1a5fad2b56513e580639124d49f6d7fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:08Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65g42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:50Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.449466 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:50Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.459783 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1a5946-d90a-46ec-a25c-86027295b8e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff6af37a71ff334d7ab7ad24a97a77223ea7952329eb08d1e7e8345c907fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849c392dc7281e1c55b222a27474e17b82931f55c373a52b6069ce535fdf7b74\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:47:54Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0312 14:47:29.072639 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0312 14:47:29.075979 1 observer_polling.go:159] Starting file observer\\\\nI0312 14:47:29.147657 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0312 14:47:29.160881 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0312 14:47:54.299844 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0312 14:47:54.299949 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:53Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9276e887ebf6a570e0c7707f87257a4d155c33e59d354ab45ab02c9e1d03598d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e71ce6d8cc9f654140e7b6b73d67bd7a81b80a2518b5f32feb7cc1a2a95450\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41f9611a9244b788b1601aaf55ce32bb185b05f4fb13897b9a94e8755f00a11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:50Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.469951 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:50Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.478089 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:50Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.488440 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec915d4a1c18059ebeeea82f6fea8505e9b88e684f7cded7c4ac243189d7ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:42Z\\\",\\\"message\\\":\\\"2026-03-12T14:48:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_098ab2ae-5172-434b-bb14-929e1a592c5a\\\\n2026-03-12T14:48:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_098ab2ae-5172-434b-bb14-929e1a592c5a to /host/opt/cni/bin/\\\\n2026-03-12T14:48:57Z [verbose] multus-daemon started\\\\n2026-03-12T14:48:57Z [verbose] Readiness Indicator file check\\\\n2026-03-12T14:49:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:50Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.496893 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a
0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:50Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.557290 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:22Z\\\",\\\"message\\\":\\\"\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:22.461258 7052 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0312 14:49:22.461216 7052 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:22.461348 7052 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0312 14:49:22.461394 7052 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:22.461430 7052 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:22.461523 7052 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:50Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.571854 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:50Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.591574 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6ee5207-3aaa-4052-8b52-5147b990fb5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73ce28d9a27fe64b6fd0a13696d453aa22ae968db063da8d9c1cefff1b6ed7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc3ac75fa8e4697061b914425ae2471c6bfbc699ede95831627d8a80df3418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7bc318ab91bd7950b647065c452bd0e38608a4cec73140dc6ffdae5e70bad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14
:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8355918ef44f602e4aed6dae9b5c26ba632d93685c715c8d458c6817a2c41c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa7479f99bdfa90e500450c70c2b1d64b7bbd15e7bf649cc889bab78e70ee896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:50Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.600983 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4271564-f655-4aa3-b732-45cbda4401a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://440f6901fa28a9d67506031e20032120e27453bd4a3f0d7a6d0a61972050e8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://329f16cd40740ccf2e50f78cfe5eb2d7cf8c47e37447f97e40259aedda51ea86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6c103b990f3bedc31a6be395538872d3a80f0ad18990155f3c9ae7388e1138d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:50Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.608891 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3abc18e-3b7e-4afe-b35b-3b619290e875\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lmjrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:50Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.619247 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8058e6536cfd591c8a4e88d5007d0a72ed0b92926d01f5909915aaab440a686\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:50Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.620433 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.620565 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:50 crc kubenswrapper[4832]: E0312 14:49:50.620618 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:50 crc kubenswrapper[4832]: E0312 14:49:50.620661 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:50 crc kubenswrapper[4832]: I0312 14:49:50.629362 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:50Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.368153 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5zjpx_18cc235e-1890-485d-8ca2-bf03b2006ab9/ovnkube-controller/3.log" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.368995 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5zjpx_18cc235e-1890-485d-8ca2-bf03b2006ab9/ovnkube-controller/2.log" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.371594 4832 
generic.go:334] "Generic (PLEG): container finished" podID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerID="1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f" exitCode=1 Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.371643 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerDied","Data":"1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f"} Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.371698 4832 scope.go:117] "RemoveContainer" containerID="a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.372958 4832 scope.go:117] "RemoveContainer" containerID="1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f" Mar 12 14:49:51 crc kubenswrapper[4832]: E0312 14:49:51.373281 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.391018 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:51Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.667641 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:51 crc kubenswrapper[4832]: E0312 14:49:51.667858 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.668231 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:51 crc kubenswrapper[4832]: E0312 14:49:51.668368 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.668420 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:51 crc kubenswrapper[4832]: E0312 14:49:51.668615 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.672480 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-
12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:51Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.684526 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:51Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.697619 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f191cdcc-8d3e-4f37-8cda-a312cac33177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8ed05416542ba370b5239dcc550e0077b4a52050843ef7751dc812e8794e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38bc2d755830005662e8d0bf0b5f7e1a5fad
2b56513e580639124d49f6d7fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65g42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:51Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.708010 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:51Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.721456 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1a5946-d90a-46ec-a25c-86027295b8e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff6af37a71ff334d7ab7ad24a97a77223ea7952329eb08d1e7e8345c907fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849c392dc7281e1c55b222a27474e17b82931f55c373a52b6069ce535fdf7b74\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:47:54Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0312 14:47:29.072639 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0312 14:47:29.075979 1 observer_polling.go:159] Starting file observer\\\\nI0312 14:47:29.147657 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0312 14:47:29.160881 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0312 14:47:54.299844 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0312 14:47:54.299949 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:53Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9276e887ebf6a570e0c7707f87257a4d155c33e59d354ab45ab02c9e1d03598d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e71ce6d8cc9f654140e7b6b73d67bd7a81b80a2518b5f32feb7cc1a2a95450\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41f9611a9244b788b1601aaf55ce32bb185b05f4fb13897b9a94e8755f00a11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:51Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.732605 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:51Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.745384 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:51Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.756359 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec915d4a1c18059ebeeea82f6fea8505e9b88e684f7cded7c4ac243189d7ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:42Z\\\",\\\"message\\\":\\\"2026-03-12T14:48:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_098ab2ae-5172-434b-bb14-929e1a592c5a\\\\n2026-03-12T14:48:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_098ab2ae-5172-434b-bb14-929e1a592c5a to /host/opt/cni/bin/\\\\n2026-03-12T14:48:57Z [verbose] multus-daemon started\\\\n2026-03-12T14:48:57Z [verbose] Readiness Indicator file check\\\\n2026-03-12T14:49:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:51Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.765646 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:51Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.782883 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a694a1adaad707a20f27754940e7cf7601ce8a853794fe1a66d854f0bcb79b52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:22Z\\\",\\\"message\\\":\\\"\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:22.461258 7052 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0312 14:49:22.461216 7052 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:22.461348 7052 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0312 14:49:22.461394 7052 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:22.461430 7052 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:22.461523 7052 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:50Z\\\",\\\"message\\\":\\\"Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:50.526618 7344 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0312 14:49:50.526652 7344 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0312 14:49:50.526682 7344 factory.go:656] Stopping watch factory\\\\nI0312 14:49:50.526697 7344 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:50.526718 7344 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0312 14:49:50.526729 7344 handler.go:208] Removed *v1.Node event handler 2\\\\nI0312 14:49:50.526740 7344 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:50.526815 7344 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4
c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:51Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.792571 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:51Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.809059 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6ee5207-3aaa-4052-8b52-5147b990fb5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73ce28d9a27fe64b6fd0a13696d453aa22ae968db063da8d9c1cefff1b6ed7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc3ac75fa8e4697061b914425ae2471c6bfbc699ede95831627d8a80df3418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7bc318ab91bd7950b647065c452bd0e38608a4cec73140dc6ffdae5e70bad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14
:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8355918ef44f602e4aed6dae9b5c26ba632d93685c715c8d458c6817a2c41c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa7479f99bdfa90e500450c70c2b1d64b7bbd15e7bf649cc889bab78e70ee896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:51Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.819315 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4271564-f655-4aa3-b732-45cbda4401a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://440f6901fa28a9d67506031e20032120e27453bd4a3f0d7a6d0a61972050e8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://329f16cd40740ccf2e50f78cfe5eb2d7cf8c47e37447f97e40259aedda51ea86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6c103b990f3bedc31a6be395538872d3a80f0ad18990155f3c9ae7388e1138d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:51Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.830990 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:51Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.840603 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:51Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.853617 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3abc18e-3b7e-4afe-b35b-3b619290e875\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lmjrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:51Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:51 crc 
kubenswrapper[4832]: I0312 14:49:51.868737 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14
b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8058e6536cfd591c8a4e88d5007d0a72ed0b92926d01f5909915aaab440a686\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 
14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:51Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:51 crc kubenswrapper[4832]: I0312 14:49:51.884305 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:51Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.381690 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5zjpx_18cc235e-1890-485d-8ca2-bf03b2006ab9/ovnkube-controller/3.log" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.386081 4832 scope.go:117] "RemoveContainer" containerID="1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f" Mar 12 14:49:52 crc kubenswrapper[4832]: E0312 14:49:52.386294 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.407288 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.421661 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 
14:49:52.434785 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.445916 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.458612 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1a5946-d90a-46ec-a25c-86027295b8e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff6af37a71ff334d7ab7ad24a97a77223ea7952329eb08d1e7e8345c907fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849c392dc7281e1c55b222a27474e17b82931f55c373a52b6069ce535fdf7b74\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:47:54Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0312 14:47:29.072639 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0312 14:47:29.075979 1 observer_polling.go:159] Starting file observer\\\\nI0312 14:47:29.147657 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0312 14:47:29.160881 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0312 14:47:54.299844 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0312 14:47:54.299949 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:53Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9276e887ebf6a570e0c7707f87257a4d155c33e59d354ab45ab02c9e1d03598d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e71ce6d8cc9f654140e7b6b73d67bd7a81b80a2518b5f32feb7cc1a2a95450\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41f9611a9244b788b1601aaf55ce32bb185b05f4fb13897b9a94e8755f00a11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.468806 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.481971 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.493521 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f191cdcc-8d3e-4f37-8cda-a312cac33177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8ed05416542ba370b5239dcc550e0077b4a52050843ef7751dc812e8794e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38bc2d755830005662e8d0bf0b5f7e1a5fad2b56513e580639124d49f6d7fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:08Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65g42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.505049 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.524970 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:50Z\\\",\\\"message\\\":\\\"Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:50.526618 7344 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0312 14:49:50.526652 7344 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0312 14:49:50.526682 7344 factory.go:656] Stopping watch factory\\\\nI0312 14:49:50.526697 7344 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:50.526718 7344 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0312 14:49:50.526729 7344 handler.go:208] Removed *v1.Node event handler 2\\\\nI0312 14:49:50.526740 7344 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:50.526815 7344 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0ad
b7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.535717 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.565520 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6ee5207-3aaa-4052-8b52-5147b990fb5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73ce28d9a27fe64b6fd0a13696d453aa22ae968db063da8d9c1cefff1b6ed7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc3ac75fa8e4697061b914425ae2471c6bfbc699ede95831627d8a80df3418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7bc318ab91bd7950b647065c452bd0e38608a4cec73140dc6ffdae5e70bad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14
:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8355918ef44f602e4aed6dae9b5c26ba632d93685c715c8d458c6817a2c41c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa7479f99bdfa90e500450c70c2b1d64b7bbd15e7bf649cc889bab78e70ee896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.577549 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4271564-f655-4aa3-b732-45cbda4401a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://440f6901fa28a9d67506031e20032120e27453bd4a3f0d7a6d0a61972050e8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://329f16cd40740ccf2e50f78cfe5eb2d7cf8c47e37447f97e40259aedda51ea86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6c103b990f3bedc31a6be395538872d3a80f0ad18990155f3c9ae7388e1138d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.588167 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.596192 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.607485 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec915d4a1c18059ebeeea82f6fea8505e9b88e684f7cded7c4ac243189d7ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:42Z\\\",\\\"message\\\":\\\"2026-03-12T14:48:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_098ab2ae-5172-434b-bb14-929e1a592c5a\\\\n2026-03-12T14:48:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_098ab2ae-5172-434b-bb14-929e1a592c5a to /host/opt/cni/bin/\\\\n2026-03-12T14:48:57Z [verbose] multus-daemon started\\\\n2026-03-12T14:48:57Z [verbose] Readiness Indicator file check\\\\n2026-03-12T14:49:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.616345 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3abc18e-3b7e-4afe-b35b-3b619290e875\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lmjrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc 
kubenswrapper[4832]: I0312 14:49:52.618718 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:52 crc kubenswrapper[4832]: E0312 14:49:52.618820 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.629939 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd
90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8058e6536cfd591c8a4e88d5007d0a72ed0b92926d01f5909915aaab440a686\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.641875 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.652880 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c62aa7e-9fce-4677-b6bc-beb87644af0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b6925c96bb8f6caaade8df9b8678c9736174447869519a099be13eb9bd2254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a
0c76f2418501169bc5233d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9f5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kdl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.673362 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cc235e-1890-485d-8ca2-bf03b2006ab9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:50Z\\\",\\\"message\\\":\\\"Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 14:49:50.526618 7344 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0312 14:49:50.526652 7344 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0312 14:49:50.526682 7344 factory.go:656] Stopping watch factory\\\\nI0312 14:49:50.526697 7344 ovnkube.go:599] Stopped ovnkube\\\\nI0312 14:49:50.526718 7344 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0312 14:49:50.526729 7344 handler.go:208] Removed *v1.Node event handler 2\\\\nI0312 14:49:50.526740 7344 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 14:49:50.526815 7344 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873f098546bc69b0ad
b7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knqcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5zjpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.684232 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ssdc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d32edbc-db72-4220-bc60-8675d56c803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30b7622f6561ec1111d3e8463a889630027066203b402572a8255fc709b15b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ssdc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.706834 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6ee5207-3aaa-4052-8b52-5147b990fb5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73ce28d9a27fe64b6fd0a13696d453aa22ae968db063da8d9c1cefff1b6ed7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc3ac75fa8e4697061b914425ae2471c6bfbc699ede95831627d8a80df3418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7bc318ab91bd7950b647065c452bd0e38608a4cec73140dc6ffdae5e70bad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14
:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8355918ef44f602e4aed6dae9b5c26ba632d93685c715c8d458c6817a2c41c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa7479f99bdfa90e500450c70c2b1d64b7bbd15e7bf649cc889bab78e70ee896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d639ecc92c654b3fa27465e4b3b04f5c3667719add2df9de6bb4f6977f81a43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://587420097bc6f525c25bfe92ba6ec620930ad091812485c1021f29c2a44144e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2a3e18a2c885ddd82878cf68724553f974f6c414ddf698f637698136d4da9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.722801 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4271564-f655-4aa3-b732-45cbda4401a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://440f6901fa28a9d67506031e20032120e27453bd4a3f0d7a6d0a61972050e8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://329f16cd40740ccf2e50f78cfe5eb2d7cf8c47e37447f97e40259aedda51ea86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6c103b990f3bedc31a6be395538872d3a80f0ad18990155f3c9ae7388e1138d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ae2dfd33f7988a998cfc6dfe268d57667985cc599ef321f1bac8fffacb4d2029\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.738025 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.752789 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4tdb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1e1caf-52e7-49bf-acce-ab7fbd84e6a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c95ace1bc7b31a4e7e6cbd5425f278a3b6d2bfa3cdfeec45dbcd4f1dcdbf288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhp86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4tdb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: E0312 14:49:52.768284 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.769596 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c2phv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c82e050-0168-4210-bb2d-7d8bbbc5e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec915d4a1c18059ebeeea82f6fea8505e
9b88e684f7cded7c4ac243189d7ec0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T14:49:42Z\\\",\\\"message\\\":\\\"2026-03-12T14:48:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_098ab2ae-5172-434b-bb14-929e1a592c5a\\\\n2026-03-12T14:48:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_098ab2ae-5172-434b-bb14-929e1a592c5a to /host/opt/cni/bin/\\\\n2026-03-12T14:48:57Z [verbose] multus-daemon started\\\\n2026-03-12T14:48:57Z [verbose] Readiness Indicator file check\\\\n2026-03-12T14:49:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmzrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c2phv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.784169 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3abc18e-3b7e-4afe-b35b-3b619290e875\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgtvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lmjrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc 
kubenswrapper[4832]: I0312 14:49:52.802010 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e67bad-c4c5-4b0f-a538-a3a5c72a6902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac6878f3023c14
b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8058e6536cfd591c8a4e88d5007d0a72ed0b92926d01f5909915aaab440a686\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:48:29Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 14:48:29.184180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 14:48:29.184310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 14:48:29.184904 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3986998457/tls.crt::/tmp/serving-cert-3986998457/tls.key\\\\\\\"\\\\nI0312 14:48:29.726756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 14:48:29.730185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 14:48:29.730239 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 14:48:29.730290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 14:48:29.730318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 14:48:29.737606 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 14:48:29.737630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 14:48:29.737640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 14:48:29.737644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 14:48:29.737647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 
14:48:29.737650 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 14:48:29.737829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 14:48:29.739858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.820525 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.838321 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d890b298d683f22407de0ff19203700819389f181881545f51dd87d718d1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.854755 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e819b87c82e8ab80334acb9c2c70976de4afdc09e2cee6e4b4c041a0c7a965c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://d3b77fc857a24af35c61089f0b55019309706c2f82090f10fa4c5fadc802e592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.870047 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.884169 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650e21b0-4cef-4068-aec8-ec9b34e19c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4309a70b7a6f2b5fc40009f5af38f1726632b0057de34b35f7a61b1816af3553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e685f56139e3b15d666f9e3fe52594fd39a9a87c3fe37995161f93901d10b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.904006 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1a5946-d90a-46ec-a25c-86027295b8e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff6af37a71ff334d7ab7ad24a97a77223ea7952329eb08d1e7e8345c907fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849c392dc7281e1c55b222a27474e17b82931f55c373a52b6069ce535fdf7b74\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T14:47:54Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0312 14:47:29.072639 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0312 14:47:29.075979 1 observer_polling.go:159] Starting file observer\\\\nI0312 14:47:29.147657 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0312 14:47:29.160881 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0312 14:47:54.299844 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0312 14:47:54.299949 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:47:53Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T14:47:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9276e887ebf6a570e0c7707f87257a4d155c33e59d354ab45ab02c9e1d03598d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e71ce6d8cc9f654140e7b6b73d67bd7a81b80a2518b5f32feb7cc1a2a95450\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41f9611a9244b788b1601aaf55ce32bb185b05f4fb13897b9a94e8755f00a11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:47:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.918219 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26fa78e85bee9b7aba19e2747a42dd1ae038510361d8a9260b074bb7678fe3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:48:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.936020 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-72zcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5f70b0-6d75-4511-8423-e826258274d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4952487b97f1907490a00c7e16a330ea8260cbc22ec3bead3ff3b063b981ebf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39963a401d32d236e02c858f42f125479c190a6f056b5d5449899a0ffd5cbe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc3dfa02ed1ca533933c7b788a4cc589d76cd72eb526375f798e3068241820df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad7596c9e4beee3ba33ce2192c31dca801943f107cd9fe019b72c6e5796541a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f6bfcdf70d208c105c5060c32dcaf31cabd2be1ceed6766821f0be1471072c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:48:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:48:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c7294dd01e8175d4f340d6d58053c20497a9fcbd1814002514fa68cdfa72b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa6ca4479733474a1f7497b8fe4a2d030263709ed65191df5bc20b611f0056d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nfqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:48:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-72zcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:52 crc kubenswrapper[4832]: I0312 14:49:52.950357 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f191cdcc-8d3e-4f37-8cda-a312cac33177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8ed05416542ba370b5239dcc550e0077b4a52050843ef7751dc812e8794e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38bc2d755830005662e8d0bf0b5f7e1a5fad2b56513e580639124d49f6d7fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T14:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr6vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T14:49:08Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-65g42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:52Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:53 crc kubenswrapper[4832]: I0312 14:49:53.619716 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:53 crc kubenswrapper[4832]: I0312 14:49:53.619776 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:53 crc kubenswrapper[4832]: I0312 14:49:53.619776 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:53 crc kubenswrapper[4832]: E0312 14:49:53.619893 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:53 crc kubenswrapper[4832]: E0312 14:49:53.620124 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:53 crc kubenswrapper[4832]: E0312 14:49:53.620201 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.619489 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:54 crc kubenswrapper[4832]: E0312 14:49:54.619825 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.765788 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.765829 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.765837 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.765852 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.765861 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:54Z","lastTransitionTime":"2026-03-12T14:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:54 crc kubenswrapper[4832]: E0312 14:49:54.778199 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:54Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.781590 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.781627 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.781638 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.781654 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.781666 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:54Z","lastTransitionTime":"2026-03-12T14:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:54 crc kubenswrapper[4832]: E0312 14:49:54.793765 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:54Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.796942 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.796985 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.796997 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.797016 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.797028 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:54Z","lastTransitionTime":"2026-03-12T14:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:54 crc kubenswrapper[4832]: E0312 14:49:54.810663 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:54Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.822409 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.822472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.822492 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.822549 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.822568 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:54Z","lastTransitionTime":"2026-03-12T14:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:54 crc kubenswrapper[4832]: E0312 14:49:54.845018 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:54Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.850023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.850111 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.850133 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.850162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:49:54 crc kubenswrapper[4832]: I0312 14:49:54.850185 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:49:54Z","lastTransitionTime":"2026-03-12T14:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:49:54 crc kubenswrapper[4832]: E0312 14:49:54.864939 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T14:49:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c18c2a96-b70d-433d-bc7b-43bacf303c77\\\",\\\"systemUUID\\\":\\\"4e30ce0f-4c5a-4dcd-a098-48ed124d926b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T14:49:54Z is after 2025-08-24T17:21:41Z" Mar 12 14:49:54 crc kubenswrapper[4832]: E0312 14:49:54.865097 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 14:49:55 crc kubenswrapper[4832]: I0312 14:49:55.619698 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:55 crc kubenswrapper[4832]: I0312 14:49:55.619751 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:55 crc kubenswrapper[4832]: I0312 14:49:55.619721 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:55 crc kubenswrapper[4832]: E0312 14:49:55.619891 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:55 crc kubenswrapper[4832]: E0312 14:49:55.620092 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:55 crc kubenswrapper[4832]: E0312 14:49:55.620245 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:56 crc kubenswrapper[4832]: I0312 14:49:56.619615 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:56 crc kubenswrapper[4832]: E0312 14:49:56.619745 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:57 crc kubenswrapper[4832]: I0312 14:49:57.619560 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:57 crc kubenswrapper[4832]: I0312 14:49:57.619692 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:57 crc kubenswrapper[4832]: I0312 14:49:57.619900 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:57 crc kubenswrapper[4832]: E0312 14:49:57.620050 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:49:57 crc kubenswrapper[4832]: E0312 14:49:57.620624 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:57 crc kubenswrapper[4832]: E0312 14:49:57.620722 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:57 crc kubenswrapper[4832]: E0312 14:49:57.769938 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 14:49:58 crc kubenswrapper[4832]: I0312 14:49:58.619121 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:49:58 crc kubenswrapper[4832]: E0312 14:49:58.619330 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:49:59 crc kubenswrapper[4832]: I0312 14:49:59.619150 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:49:59 crc kubenswrapper[4832]: I0312 14:49:59.619221 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:49:59 crc kubenswrapper[4832]: I0312 14:49:59.619228 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:49:59 crc kubenswrapper[4832]: E0312 14:49:59.619993 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:49:59 crc kubenswrapper[4832]: E0312 14:49:59.620127 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:49:59 crc kubenswrapper[4832]: E0312 14:49:59.619781 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:00 crc kubenswrapper[4832]: I0312 14:50:00.619542 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:00 crc kubenswrapper[4832]: E0312 14:50:00.619727 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:01 crc kubenswrapper[4832]: I0312 14:50:01.619064 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:01 crc kubenswrapper[4832]: I0312 14:50:01.619105 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:01 crc kubenswrapper[4832]: I0312 14:50:01.619083 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:01 crc kubenswrapper[4832]: E0312 14:50:01.619423 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:01 crc kubenswrapper[4832]: E0312 14:50:01.619692 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:01 crc kubenswrapper[4832]: E0312 14:50:01.619830 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:02 crc kubenswrapper[4832]: I0312 14:50:02.619377 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:02 crc kubenswrapper[4832]: E0312 14:50:02.619617 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:02 crc kubenswrapper[4832]: I0312 14:50:02.651404 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=76.651386124 podStartE2EDuration="1m16.651386124s" podCreationTimestamp="2026-03-12 14:48:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:02.651376873 +0000 UTC m=+161.295391159" watchObservedRunningTime="2026-03-12 14:50:02.651386124 +0000 UTC m=+161.295400360" Mar 12 14:50:02 crc kubenswrapper[4832]: I0312 14:50:02.727178 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=68.727159884 podStartE2EDuration="1m8.727159884s" podCreationTimestamp="2026-03-12 14:48:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:02.727090812 +0000 UTC m=+161.371105068" watchObservedRunningTime="2026-03-12 14:50:02.727159884 +0000 UTC m=+161.371174110" Mar 12 14:50:02 crc kubenswrapper[4832]: I0312 14:50:02.748797 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=28.74877677 
podStartE2EDuration="28.74877677s" podCreationTimestamp="2026-03-12 14:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:02.74843632 +0000 UTC m=+161.392450546" watchObservedRunningTime="2026-03-12 14:50:02.74877677 +0000 UTC m=+161.392791006" Mar 12 14:50:02 crc kubenswrapper[4832]: E0312 14:50:02.770717 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 14:50:02 crc kubenswrapper[4832]: I0312 14:50:02.785723 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-72zcf" podStartSLOduration=96.785705443 podStartE2EDuration="1m36.785705443s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:02.783377007 +0000 UTC m=+161.427391233" watchObservedRunningTime="2026-03-12 14:50:02.785705443 +0000 UTC m=+161.429719659" Mar 12 14:50:02 crc kubenswrapper[4832]: I0312 14:50:02.809522 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-ssdc8" podStartSLOduration=97.809485881 podStartE2EDuration="1m37.809485881s" podCreationTimestamp="2026-03-12 14:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:02.809403059 +0000 UTC m=+161.453417355" watchObservedRunningTime="2026-03-12 14:50:02.809485881 +0000 UTC m=+161.453500107" Mar 12 14:50:02 crc kubenswrapper[4832]: I0312 14:50:02.809822 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-65g42" podStartSLOduration=96.80981616 podStartE2EDuration="1m36.80981616s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:02.797679374 +0000 UTC m=+161.441693600" watchObservedRunningTime="2026-03-12 14:50:02.80981616 +0000 UTC m=+161.453830386" Mar 12 14:50:02 crc kubenswrapper[4832]: I0312 14:50:02.832050 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=48.832029254 podStartE2EDuration="48.832029254s" podCreationTimestamp="2026-03-12 14:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:02.830570312 +0000 UTC m=+161.474584568" watchObservedRunningTime="2026-03-12 14:50:02.832029254 +0000 UTC m=+161.476043490" Mar 12 14:50:02 crc kubenswrapper[4832]: I0312 14:50:02.844617 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=39.844599292 podStartE2EDuration="39.844599292s" podCreationTimestamp="2026-03-12 14:49:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:02.84381311 +0000 UTC m=+161.487827336" watchObservedRunningTime="2026-03-12 14:50:02.844599292 +0000 UTC m=+161.488613538" Mar 12 14:50:02 crc kubenswrapper[4832]: I0312 14:50:02.886788 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-p4tdb" podStartSLOduration=97.886767954 podStartE2EDuration="1m37.886767954s" podCreationTimestamp="2026-03-12 14:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-03-12 14:50:02.87152419 +0000 UTC m=+161.515538416" watchObservedRunningTime="2026-03-12 14:50:02.886767954 +0000 UTC m=+161.530782180" Mar 12 14:50:02 crc kubenswrapper[4832]: I0312 14:50:02.887342 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-c2phv" podStartSLOduration=96.8873381 podStartE2EDuration="1m36.8873381s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:02.88731963 +0000 UTC m=+161.531333856" watchObservedRunningTime="2026-03-12 14:50:02.8873381 +0000 UTC m=+161.531352326" Mar 12 14:50:02 crc kubenswrapper[4832]: I0312 14:50:02.898717 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podStartSLOduration=96.898705895 podStartE2EDuration="1m36.898705895s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:02.898399656 +0000 UTC m=+161.542413882" watchObservedRunningTime="2026-03-12 14:50:02.898705895 +0000 UTC m=+161.542720121" Mar 12 14:50:03 crc kubenswrapper[4832]: I0312 14:50:03.618713 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:03 crc kubenswrapper[4832]: I0312 14:50:03.618849 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:03 crc kubenswrapper[4832]: E0312 14:50:03.618918 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:03 crc kubenswrapper[4832]: E0312 14:50:03.619032 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:03 crc kubenswrapper[4832]: I0312 14:50:03.618725 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:03 crc kubenswrapper[4832]: E0312 14:50:03.619152 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:04 crc kubenswrapper[4832]: I0312 14:50:04.618939 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:04 crc kubenswrapper[4832]: E0312 14:50:04.619182 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.221387 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.221444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.221461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.221483 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.221501 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T14:50:05Z","lastTransitionTime":"2026-03-12T14:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.278643 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk"] Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.279304 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.282004 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.282362 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.282607 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.283979 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.305622 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9a8deac3-5a42-4548-81c2-427f886fed96-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-48qdk\" (UID: \"9a8deac3-5a42-4548-81c2-427f886fed96\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.305733 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9a8deac3-5a42-4548-81c2-427f886fed96-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-48qdk\" (UID: 
\"9a8deac3-5a42-4548-81c2-427f886fed96\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.305781 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a8deac3-5a42-4548-81c2-427f886fed96-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-48qdk\" (UID: \"9a8deac3-5a42-4548-81c2-427f886fed96\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.305811 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a8deac3-5a42-4548-81c2-427f886fed96-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-48qdk\" (UID: \"9a8deac3-5a42-4548-81c2-427f886fed96\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.305845 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9a8deac3-5a42-4548-81c2-427f886fed96-service-ca\") pod \"cluster-version-operator-5c965bbfc6-48qdk\" (UID: \"9a8deac3-5a42-4548-81c2-427f886fed96\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.407368 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9a8deac3-5a42-4548-81c2-427f886fed96-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-48qdk\" (UID: \"9a8deac3-5a42-4548-81c2-427f886fed96\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.407451 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a8deac3-5a42-4548-81c2-427f886fed96-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-48qdk\" (UID: \"9a8deac3-5a42-4548-81c2-427f886fed96\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.407477 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9a8deac3-5a42-4548-81c2-427f886fed96-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-48qdk\" (UID: \"9a8deac3-5a42-4548-81c2-427f886fed96\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.407501 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a8deac3-5a42-4548-81c2-427f886fed96-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-48qdk\" (UID: \"9a8deac3-5a42-4548-81c2-427f886fed96\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.407659 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9a8deac3-5a42-4548-81c2-427f886fed96-service-ca\") pod \"cluster-version-operator-5c965bbfc6-48qdk\" (UID: \"9a8deac3-5a42-4548-81c2-427f886fed96\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.407742 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9a8deac3-5a42-4548-81c2-427f886fed96-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-48qdk\" (UID: \"9a8deac3-5a42-4548-81c2-427f886fed96\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.407848 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9a8deac3-5a42-4548-81c2-427f886fed96-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-48qdk\" (UID: \"9a8deac3-5a42-4548-81c2-427f886fed96\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.409255 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9a8deac3-5a42-4548-81c2-427f886fed96-service-ca\") pod \"cluster-version-operator-5c965bbfc6-48qdk\" (UID: \"9a8deac3-5a42-4548-81c2-427f886fed96\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.417207 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a8deac3-5a42-4548-81c2-427f886fed96-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-48qdk\" (UID: \"9a8deac3-5a42-4548-81c2-427f886fed96\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.435293 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a8deac3-5a42-4548-81c2-427f886fed96-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-48qdk\" (UID: \"9a8deac3-5a42-4548-81c2-427f886fed96\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.597124 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.619011 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.620886 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.620896 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:05 crc kubenswrapper[4832]: E0312 14:50:05.621097 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:05 crc kubenswrapper[4832]: E0312 14:50:05.621055 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:05 crc kubenswrapper[4832]: E0312 14:50:05.621319 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.675222 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 12 14:50:05 crc kubenswrapper[4832]: I0312 14:50:05.686085 4832 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 12 14:50:06 crc kubenswrapper[4832]: I0312 14:50:06.440370 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" event={"ID":"9a8deac3-5a42-4548-81c2-427f886fed96","Type":"ContainerStarted","Data":"80ab3afe41d51a21a6097e6139961f4b1dacd7d82a6a1deffe3fa5b7bcd17a72"} Mar 12 14:50:06 crc kubenswrapper[4832]: I0312 14:50:06.440831 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" event={"ID":"9a8deac3-5a42-4548-81c2-427f886fed96","Type":"ContainerStarted","Data":"bf22980400bc528ef6b16ee44ddc901aafe63f1dc5e36b3629ffbc21ec83b20c"} Mar 12 14:50:06 crc kubenswrapper[4832]: I0312 14:50:06.456008 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-48qdk" podStartSLOduration=100.455987283 podStartE2EDuration="1m40.455987283s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:06.455365646 +0000 UTC m=+165.099379902" watchObservedRunningTime="2026-03-12 14:50:06.455987283 +0000 UTC m=+165.100001509" Mar 12 14:50:06 crc kubenswrapper[4832]: I0312 14:50:06.619744 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:06 crc kubenswrapper[4832]: E0312 14:50:06.620239 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:06 crc kubenswrapper[4832]: I0312 14:50:06.620404 4832 scope.go:117] "RemoveContainer" containerID="1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f" Mar 12 14:50:06 crc kubenswrapper[4832]: E0312 14:50:06.620570 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" Mar 12 14:50:07 crc kubenswrapper[4832]: I0312 14:50:07.619276 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:07 crc kubenswrapper[4832]: I0312 14:50:07.619376 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:07 crc kubenswrapper[4832]: E0312 14:50:07.619428 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:07 crc kubenswrapper[4832]: I0312 14:50:07.619386 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:07 crc kubenswrapper[4832]: E0312 14:50:07.619530 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:07 crc kubenswrapper[4832]: E0312 14:50:07.619703 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:07 crc kubenswrapper[4832]: E0312 14:50:07.772534 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 14:50:08 crc kubenswrapper[4832]: I0312 14:50:08.618776 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:08 crc kubenswrapper[4832]: E0312 14:50:08.618964 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:09 crc kubenswrapper[4832]: I0312 14:50:09.618888 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:09 crc kubenswrapper[4832]: I0312 14:50:09.618981 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:09 crc kubenswrapper[4832]: E0312 14:50:09.619224 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:09 crc kubenswrapper[4832]: E0312 14:50:09.619010 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:09 crc kubenswrapper[4832]: I0312 14:50:09.618981 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:09 crc kubenswrapper[4832]: E0312 14:50:09.619446 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:10 crc kubenswrapper[4832]: I0312 14:50:10.619039 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:10 crc kubenswrapper[4832]: E0312 14:50:10.619224 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:11 crc kubenswrapper[4832]: I0312 14:50:11.619041 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:11 crc kubenswrapper[4832]: I0312 14:50:11.619094 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:11 crc kubenswrapper[4832]: I0312 14:50:11.619045 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:11 crc kubenswrapper[4832]: E0312 14:50:11.619144 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:11 crc kubenswrapper[4832]: E0312 14:50:11.619232 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:11 crc kubenswrapper[4832]: E0312 14:50:11.619317 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:12 crc kubenswrapper[4832]: I0312 14:50:12.618926 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:12 crc kubenswrapper[4832]: E0312 14:50:12.620784 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:12 crc kubenswrapper[4832]: E0312 14:50:12.773113 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 14:50:13 crc kubenswrapper[4832]: I0312 14:50:13.618922 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:13 crc kubenswrapper[4832]: I0312 14:50:13.619014 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:13 crc kubenswrapper[4832]: E0312 14:50:13.619116 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:13 crc kubenswrapper[4832]: I0312 14:50:13.618964 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:13 crc kubenswrapper[4832]: E0312 14:50:13.619337 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:13 crc kubenswrapper[4832]: E0312 14:50:13.619578 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:14 crc kubenswrapper[4832]: I0312 14:50:14.126297 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs\") pod \"network-metrics-daemon-lmjrb\" (UID: \"c3abc18e-3b7e-4afe-b35b-3b619290e875\") " pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:14 crc kubenswrapper[4832]: E0312 14:50:14.126535 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:50:14 crc kubenswrapper[4832]: E0312 14:50:14.126614 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs podName:c3abc18e-3b7e-4afe-b35b-3b619290e875 nodeName:}" failed. No retries permitted until 2026-03-12 14:51:18.126592382 +0000 UTC m=+236.770606618 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs") pod "network-metrics-daemon-lmjrb" (UID: "c3abc18e-3b7e-4afe-b35b-3b619290e875") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:50:14 crc kubenswrapper[4832]: I0312 14:50:14.619065 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:14 crc kubenswrapper[4832]: E0312 14:50:14.619246 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:15 crc kubenswrapper[4832]: I0312 14:50:15.618758 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:15 crc kubenswrapper[4832]: I0312 14:50:15.618824 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:15 crc kubenswrapper[4832]: I0312 14:50:15.618900 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:15 crc kubenswrapper[4832]: E0312 14:50:15.618958 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:15 crc kubenswrapper[4832]: E0312 14:50:15.619094 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:15 crc kubenswrapper[4832]: E0312 14:50:15.619409 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:16 crc kubenswrapper[4832]: I0312 14:50:16.619551 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:16 crc kubenswrapper[4832]: E0312 14:50:16.619889 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:17 crc kubenswrapper[4832]: I0312 14:50:17.619669 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:17 crc kubenswrapper[4832]: I0312 14:50:17.619730 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:17 crc kubenswrapper[4832]: E0312 14:50:17.619866 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:17 crc kubenswrapper[4832]: I0312 14:50:17.619669 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:17 crc kubenswrapper[4832]: E0312 14:50:17.620036 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:17 crc kubenswrapper[4832]: E0312 14:50:17.620151 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:17 crc kubenswrapper[4832]: E0312 14:50:17.774784 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 14:50:18 crc kubenswrapper[4832]: I0312 14:50:18.619316 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:18 crc kubenswrapper[4832]: E0312 14:50:18.619549 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:19 crc kubenswrapper[4832]: I0312 14:50:19.618753 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:19 crc kubenswrapper[4832]: I0312 14:50:19.618805 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:19 crc kubenswrapper[4832]: I0312 14:50:19.618897 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:19 crc kubenswrapper[4832]: E0312 14:50:19.618922 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:19 crc kubenswrapper[4832]: E0312 14:50:19.619080 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:19 crc kubenswrapper[4832]: E0312 14:50:19.619256 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:20 crc kubenswrapper[4832]: I0312 14:50:20.619472 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:20 crc kubenswrapper[4832]: E0312 14:50:20.619685 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:20 crc kubenswrapper[4832]: I0312 14:50:20.620887 4832 scope.go:117] "RemoveContainer" containerID="1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f" Mar 12 14:50:20 crc kubenswrapper[4832]: E0312 14:50:20.621167 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5zjpx_openshift-ovn-kubernetes(18cc235e-1890-485d-8ca2-bf03b2006ab9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" Mar 12 14:50:21 crc kubenswrapper[4832]: I0312 14:50:21.618821 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:21 crc kubenswrapper[4832]: I0312 14:50:21.618881 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:21 crc kubenswrapper[4832]: E0312 14:50:21.618945 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:21 crc kubenswrapper[4832]: I0312 14:50:21.618890 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:21 crc kubenswrapper[4832]: E0312 14:50:21.619102 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:21 crc kubenswrapper[4832]: E0312 14:50:21.619188 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:22 crc kubenswrapper[4832]: I0312 14:50:22.621895 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:22 crc kubenswrapper[4832]: E0312 14:50:22.622169 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:22 crc kubenswrapper[4832]: E0312 14:50:22.775718 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 14:50:23 crc kubenswrapper[4832]: I0312 14:50:23.619684 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:23 crc kubenswrapper[4832]: I0312 14:50:23.619737 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:23 crc kubenswrapper[4832]: I0312 14:50:23.619746 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:23 crc kubenswrapper[4832]: E0312 14:50:23.619966 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:23 crc kubenswrapper[4832]: E0312 14:50:23.620181 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:23 crc kubenswrapper[4832]: E0312 14:50:23.620344 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:24 crc kubenswrapper[4832]: I0312 14:50:24.619391 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:24 crc kubenswrapper[4832]: E0312 14:50:24.619653 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:25 crc kubenswrapper[4832]: I0312 14:50:25.619181 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:25 crc kubenswrapper[4832]: I0312 14:50:25.619298 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:25 crc kubenswrapper[4832]: I0312 14:50:25.619181 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:25 crc kubenswrapper[4832]: E0312 14:50:25.619327 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:25 crc kubenswrapper[4832]: E0312 14:50:25.619584 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:25 crc kubenswrapper[4832]: E0312 14:50:25.619727 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:26 crc kubenswrapper[4832]: I0312 14:50:26.619747 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:26 crc kubenswrapper[4832]: E0312 14:50:26.619926 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:27 crc kubenswrapper[4832]: I0312 14:50:27.619384 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:27 crc kubenswrapper[4832]: I0312 14:50:27.619422 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:27 crc kubenswrapper[4832]: E0312 14:50:27.619585 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:27 crc kubenswrapper[4832]: I0312 14:50:27.619391 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:27 crc kubenswrapper[4832]: E0312 14:50:27.619783 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:27 crc kubenswrapper[4832]: E0312 14:50:27.619922 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:27 crc kubenswrapper[4832]: E0312 14:50:27.783624 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 14:50:28 crc kubenswrapper[4832]: I0312 14:50:28.619081 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:28 crc kubenswrapper[4832]: E0312 14:50:28.619230 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:29 crc kubenswrapper[4832]: I0312 14:50:29.529125 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c2phv_7c82e050-0168-4210-bb2d-7d8bbbc5e74e/kube-multus/1.log" Mar 12 14:50:29 crc kubenswrapper[4832]: I0312 14:50:29.530084 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c2phv_7c82e050-0168-4210-bb2d-7d8bbbc5e74e/kube-multus/0.log" Mar 12 14:50:29 crc kubenswrapper[4832]: I0312 14:50:29.530184 4832 generic.go:334] "Generic (PLEG): container finished" podID="7c82e050-0168-4210-bb2d-7d8bbbc5e74e" containerID="3ec915d4a1c18059ebeeea82f6fea8505e9b88e684f7cded7c4ac243189d7ec0" exitCode=1 Mar 12 14:50:29 crc kubenswrapper[4832]: I0312 14:50:29.530243 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c2phv" event={"ID":"7c82e050-0168-4210-bb2d-7d8bbbc5e74e","Type":"ContainerDied","Data":"3ec915d4a1c18059ebeeea82f6fea8505e9b88e684f7cded7c4ac243189d7ec0"} Mar 12 14:50:29 crc kubenswrapper[4832]: I0312 14:50:29.530318 4832 scope.go:117] "RemoveContainer" containerID="b5682ec693643f728ba90117ec77a64c0bdcc9f5d3134970c00ace46c49ba540" Mar 12 14:50:29 crc kubenswrapper[4832]: I0312 14:50:29.532840 4832 scope.go:117] "RemoveContainer" containerID="3ec915d4a1c18059ebeeea82f6fea8505e9b88e684f7cded7c4ac243189d7ec0" Mar 12 14:50:29 crc kubenswrapper[4832]: E0312 14:50:29.533210 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-c2phv_openshift-multus(7c82e050-0168-4210-bb2d-7d8bbbc5e74e)\"" pod="openshift-multus/multus-c2phv" podUID="7c82e050-0168-4210-bb2d-7d8bbbc5e74e" Mar 12 14:50:29 crc kubenswrapper[4832]: I0312 14:50:29.618866 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:29 crc kubenswrapper[4832]: E0312 14:50:29.619019 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:29 crc kubenswrapper[4832]: I0312 14:50:29.618889 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:29 crc kubenswrapper[4832]: E0312 14:50:29.619119 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:29 crc kubenswrapper[4832]: I0312 14:50:29.618866 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:29 crc kubenswrapper[4832]: E0312 14:50:29.619188 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:30 crc kubenswrapper[4832]: I0312 14:50:30.538088 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c2phv_7c82e050-0168-4210-bb2d-7d8bbbc5e74e/kube-multus/1.log" Mar 12 14:50:30 crc kubenswrapper[4832]: I0312 14:50:30.619195 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:30 crc kubenswrapper[4832]: E0312 14:50:30.619424 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:31 crc kubenswrapper[4832]: I0312 14:50:31.619739 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:31 crc kubenswrapper[4832]: I0312 14:50:31.619841 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:31 crc kubenswrapper[4832]: I0312 14:50:31.619886 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:31 crc kubenswrapper[4832]: E0312 14:50:31.620490 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:31 crc kubenswrapper[4832]: E0312 14:50:31.620656 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:31 crc kubenswrapper[4832]: E0312 14:50:31.620701 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:32 crc kubenswrapper[4832]: I0312 14:50:32.618899 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:32 crc kubenswrapper[4832]: E0312 14:50:32.620767 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:32 crc kubenswrapper[4832]: E0312 14:50:32.784245 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 14:50:33 crc kubenswrapper[4832]: I0312 14:50:33.618842 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:33 crc kubenswrapper[4832]: E0312 14:50:33.619001 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:33 crc kubenswrapper[4832]: I0312 14:50:33.618836 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:33 crc kubenswrapper[4832]: E0312 14:50:33.619242 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:33 crc kubenswrapper[4832]: I0312 14:50:33.619363 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:33 crc kubenswrapper[4832]: E0312 14:50:33.619450 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:34 crc kubenswrapper[4832]: I0312 14:50:34.619600 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:34 crc kubenswrapper[4832]: E0312 14:50:34.619790 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:35 crc kubenswrapper[4832]: I0312 14:50:35.619067 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:35 crc kubenswrapper[4832]: I0312 14:50:35.619223 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:35 crc kubenswrapper[4832]: I0312 14:50:35.619116 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:35 crc kubenswrapper[4832]: E0312 14:50:35.619338 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:35 crc kubenswrapper[4832]: E0312 14:50:35.619453 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:35 crc kubenswrapper[4832]: E0312 14:50:35.620070 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:35 crc kubenswrapper[4832]: I0312 14:50:35.620697 4832 scope.go:117] "RemoveContainer" containerID="1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f" Mar 12 14:50:36 crc kubenswrapper[4832]: I0312 14:50:36.486878 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lmjrb"] Mar 12 14:50:36 crc kubenswrapper[4832]: I0312 14:50:36.561346 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5zjpx_18cc235e-1890-485d-8ca2-bf03b2006ab9/ovnkube-controller/3.log" Mar 12 14:50:36 crc kubenswrapper[4832]: I0312 14:50:36.564403 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerStarted","Data":"22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988"} Mar 12 14:50:36 crc kubenswrapper[4832]: I0312 14:50:36.564433 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:36 crc kubenswrapper[4832]: E0312 14:50:36.564587 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:36 crc kubenswrapper[4832]: I0312 14:50:36.564929 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:50:36 crc kubenswrapper[4832]: I0312 14:50:36.591350 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podStartSLOduration=130.591326275 podStartE2EDuration="2m10.591326275s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:36.590595004 +0000 UTC m=+195.234609250" watchObservedRunningTime="2026-03-12 14:50:36.591326275 +0000 UTC m=+195.235340511" Mar 12 14:50:36 crc kubenswrapper[4832]: I0312 14:50:36.619586 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:36 crc kubenswrapper[4832]: E0312 14:50:36.619721 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:37 crc kubenswrapper[4832]: I0312 14:50:37.619203 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:37 crc kubenswrapper[4832]: I0312 14:50:37.619263 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:37 crc kubenswrapper[4832]: E0312 14:50:37.619407 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:37 crc kubenswrapper[4832]: E0312 14:50:37.619566 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:37 crc kubenswrapper[4832]: E0312 14:50:37.786824 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 14:50:38 crc kubenswrapper[4832]: I0312 14:50:38.619559 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:38 crc kubenswrapper[4832]: I0312 14:50:38.619614 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:38 crc kubenswrapper[4832]: E0312 14:50:38.620186 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:38 crc kubenswrapper[4832]: E0312 14:50:38.620390 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:39 crc kubenswrapper[4832]: I0312 14:50:39.619407 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:39 crc kubenswrapper[4832]: I0312 14:50:39.619546 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:39 crc kubenswrapper[4832]: E0312 14:50:39.619661 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:39 crc kubenswrapper[4832]: E0312 14:50:39.619824 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:40 crc kubenswrapper[4832]: I0312 14:50:40.619355 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:40 crc kubenswrapper[4832]: E0312 14:50:40.619581 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:40 crc kubenswrapper[4832]: I0312 14:50:40.619788 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:40 crc kubenswrapper[4832]: E0312 14:50:40.620082 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:41 crc kubenswrapper[4832]: I0312 14:50:41.619776 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:41 crc kubenswrapper[4832]: I0312 14:50:41.619849 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:41 crc kubenswrapper[4832]: E0312 14:50:41.619965 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:41 crc kubenswrapper[4832]: E0312 14:50:41.620129 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:42 crc kubenswrapper[4832]: I0312 14:50:42.618902 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:42 crc kubenswrapper[4832]: I0312 14:50:42.618902 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:42 crc kubenswrapper[4832]: E0312 14:50:42.619821 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:42 crc kubenswrapper[4832]: E0312 14:50:42.620197 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:42 crc kubenswrapper[4832]: E0312 14:50:42.789924 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 14:50:43 crc kubenswrapper[4832]: I0312 14:50:43.619130 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:43 crc kubenswrapper[4832]: I0312 14:50:43.619173 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:43 crc kubenswrapper[4832]: E0312 14:50:43.619297 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:43 crc kubenswrapper[4832]: E0312 14:50:43.619389 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:44 crc kubenswrapper[4832]: I0312 14:50:44.618888 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:44 crc kubenswrapper[4832]: I0312 14:50:44.618899 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:44 crc kubenswrapper[4832]: E0312 14:50:44.619257 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:44 crc kubenswrapper[4832]: E0312 14:50:44.619021 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:44 crc kubenswrapper[4832]: I0312 14:50:44.620043 4832 scope.go:117] "RemoveContainer" containerID="3ec915d4a1c18059ebeeea82f6fea8505e9b88e684f7cded7c4ac243189d7ec0" Mar 12 14:50:45 crc kubenswrapper[4832]: I0312 14:50:45.602112 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c2phv_7c82e050-0168-4210-bb2d-7d8bbbc5e74e/kube-multus/1.log" Mar 12 14:50:45 crc kubenswrapper[4832]: I0312 14:50:45.602591 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c2phv" event={"ID":"7c82e050-0168-4210-bb2d-7d8bbbc5e74e","Type":"ContainerStarted","Data":"1c43e6d8173102d19ca758d7fb313a4e1c96a4f798e4602c17d35e077db030cd"} Mar 12 14:50:45 crc kubenswrapper[4832]: I0312 14:50:45.619126 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:45 crc kubenswrapper[4832]: I0312 14:50:45.619184 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:45 crc kubenswrapper[4832]: E0312 14:50:45.619282 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:45 crc kubenswrapper[4832]: E0312 14:50:45.619387 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:46 crc kubenswrapper[4832]: I0312 14:50:46.589025 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:50:46 crc kubenswrapper[4832]: E0312 14:50:46.589240 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:52:48.589199668 +0000 UTC m=+327.233213934 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:46 crc kubenswrapper[4832]: I0312 14:50:46.619722 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:46 crc kubenswrapper[4832]: E0312 14:50:46.619938 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lmjrb" podUID="c3abc18e-3b7e-4afe-b35b-3b619290e875" Mar 12 14:50:46 crc kubenswrapper[4832]: I0312 14:50:46.620322 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:46 crc kubenswrapper[4832]: E0312 14:50:46.620405 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 14:50:46 crc kubenswrapper[4832]: I0312 14:50:46.691039 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:46 crc kubenswrapper[4832]: I0312 14:50:46.691132 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:46 crc kubenswrapper[4832]: I0312 14:50:46.691175 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:46 crc kubenswrapper[4832]: I0312 14:50:46.691252 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:46 crc kubenswrapper[4832]: E0312 14:50:46.691341 4832 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 14:50:46 crc kubenswrapper[4832]: E0312 14:50:46.691465 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 14:52:48.691434765 +0000 UTC m=+327.335449031 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 14:50:46 crc kubenswrapper[4832]: E0312 14:50:46.691488 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:50:46 crc kubenswrapper[4832]: E0312 14:50:46.691561 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:50:46 crc kubenswrapper[4832]: E0312 14:50:46.691593 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:50:46 crc kubenswrapper[4832]: E0312 14:50:46.691709 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 
nodeName:}" failed. No retries permitted until 2026-03-12 14:52:48.691680842 +0000 UTC m=+327.335695108 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:50:46 crc kubenswrapper[4832]: E0312 14:50:46.691743 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:50:46 crc kubenswrapper[4832]: E0312 14:50:46.691769 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:50:46 crc kubenswrapper[4832]: E0312 14:50:46.691785 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:50:46 crc kubenswrapper[4832]: E0312 14:50:46.691785 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 14:50:46 crc kubenswrapper[4832]: E0312 14:50:46.691828 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-03-12 14:52:48.691814556 +0000 UTC m=+327.335828792 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:50:46 crc kubenswrapper[4832]: E0312 14:50:46.691851 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 14:52:48.691840527 +0000 UTC m=+327.335854763 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 14:50:47 crc kubenswrapper[4832]: I0312 14:50:47.618762 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:47 crc kubenswrapper[4832]: I0312 14:50:47.618814 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:47 crc kubenswrapper[4832]: E0312 14:50:47.618975 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:50:47 crc kubenswrapper[4832]: E0312 14:50:47.619153 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 14:50:48 crc kubenswrapper[4832]: I0312 14:50:48.619101 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb" Mar 12 14:50:48 crc kubenswrapper[4832]: I0312 14:50:48.619101 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:50:48 crc kubenswrapper[4832]: I0312 14:50:48.622138 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 12 14:50:48 crc kubenswrapper[4832]: I0312 14:50:48.622169 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 12 14:50:48 crc kubenswrapper[4832]: I0312 14:50:48.622463 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 12 14:50:48 crc kubenswrapper[4832]: I0312 14:50:48.622756 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 12 14:50:49 crc kubenswrapper[4832]: I0312 14:50:49.619425 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:50:49 crc kubenswrapper[4832]: I0312 14:50:49.619474 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:50:49 crc kubenswrapper[4832]: I0312 14:50:49.622207 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 12 14:50:49 crc kubenswrapper[4832]: I0312 14:50:49.623276 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.202296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.252126 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8x4hd"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.252708 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.257601 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.258097 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.259630 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lgx9r"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.260359 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.266809 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xfg24"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.267860 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xfg24" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.271268 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.282795 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-hz5vn"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.283319 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hz5vn" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.283836 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.283977 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-b9mvx"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.284532 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9mvx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.287274 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.289606 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5g46q"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.290049 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.290297 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5g46q" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.290374 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t2c67"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.290299 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.290691 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t2c67" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.290473 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.290531 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.291274 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.291953 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.292396 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.292586 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.293037 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-86f5t"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.293536 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.315625 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.315677 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.321462 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.323231 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4tg76"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.323926 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.324043 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.324602 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.330095 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.330439 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.334731 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.338795 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.341566 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.341717 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.345943 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.346325 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.346552 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.346833 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.347024 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.347252 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.347604 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.347841 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.348036 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.348218 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.350272 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.358524 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.364896 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 
14:50:56.365346 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.384060 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.384496 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.386367 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2jc2t"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.386996 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.387008 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.387104 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.387205 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.387210 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.387770 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.387950 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.387997 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.388103 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.388127 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.388383 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.388318 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.388425 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 
12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.388301 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.396337 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.396465 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.396606 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.396791 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.396923 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.397481 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.397592 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.397807 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.397866 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.398139 4832 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.398277 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.398401 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.398549 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.398572 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.398649 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.398826 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.398955 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.398979 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.399056 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.399142 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.399247 4832 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.399279 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.399346 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.399426 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.399431 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.399551 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.399588 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.399686 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.399802 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.399908 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.400711 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 12 14:50:56 crc 
kubenswrapper[4832]: I0312 14:50:56.400808 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.400919 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.401119 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.401191 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.401233 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.401312 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.401386 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.405272 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80173e6-eb6e-4671-b61c-f223b0f3dc24-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.405406 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.405435 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.405454 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/780e312c-4f87-40d8-b146-0bcefe9c9c89-config\") pod \"machine-api-operator-5694c8668f-5g46q\" (UID: \"780e312c-4f87-40d8-b146-0bcefe9c9c89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5g46q" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.405627 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d80173e6-eb6e-4671-b61c-f223b0f3dc24-etcd-client\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.405649 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38-trusted-ca\") pod \"console-operator-58897d9998-xfg24\" (UID: \"a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38\") " pod="openshift-console-operator/console-operator-58897d9998-xfg24" Mar 12 14:50:56 crc 
kubenswrapper[4832]: I0312 14:50:56.405772 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1276d8a9-5af1-4a3f-a61c-255ed424ee88-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-86f5t\" (UID: \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.405794 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t6v8\" (UniqueName: \"kubernetes.io/projected/fc57df00-709c-4cee-9d19-a00dca7d58da-kube-api-access-4t6v8\") pod \"openshift-config-operator-7777fb866f-8wfwt\" (UID: \"fc57df00-709c-4cee-9d19-a00dca7d58da\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.405812 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1276d8a9-5af1-4a3f-a61c-255ed424ee88-config\") pod \"controller-manager-879f6c89f-86f5t\" (UID: \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.406059 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.406087 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntms8\" (UniqueName: 
\"kubernetes.io/projected/3616a4cb-ef1d-4125-b880-5b1486eb1d55-kube-api-access-ntms8\") pod \"machine-approver-56656f9798-b9mvx\" (UID: \"3616a4cb-ef1d-4125-b880-5b1486eb1d55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9mvx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.406211 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.406237 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-audit-policies\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.406300 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wj5jt"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.406901 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wj5jt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.406906 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.407240 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.425607 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wgbn\" (UniqueName: \"kubernetes.io/projected/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-kube-api-access-6wgbn\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.425669 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9bd599-747b-470d-941b-fe7d6ee15be1-config\") pod \"route-controller-manager-6576b87f9c-z4rcp\" (UID: \"ef9bd599-747b-470d-941b-fe7d6ee15be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.425715 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z846d\" (UniqueName: \"kubernetes.io/projected/d80173e6-eb6e-4671-b61c-f223b0f3dc24-kube-api-access-z846d\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.425740 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1276d8a9-5af1-4a3f-a61c-255ed424ee88-serving-cert\") pod \"controller-manager-879f6c89f-86f5t\" (UID: \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.425760 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d80173e6-eb6e-4671-b61c-f223b0f3dc24-audit\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.425777 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38-config\") pod \"console-operator-58897d9998-xfg24\" (UID: \"a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38\") " pod="openshift-console-operator/console-operator-58897d9998-xfg24" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.425941 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d80173e6-eb6e-4671-b61c-f223b0f3dc24-config\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " 
pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.425963 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/780e312c-4f87-40d8-b146-0bcefe9c9c89-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5g46q\" (UID: \"780e312c-4f87-40d8-b146-0bcefe9c9c89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5g46q" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.425985 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86719732-2809-4511-8e2c-9fb82df5c4bc-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-t2c67\" (UID: \"86719732-2809-4511-8e2c-9fb82df5c4bc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t2c67" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.426177 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.426218 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3616a4cb-ef1d-4125-b880-5b1486eb1d55-config\") pod \"machine-approver-56656f9798-b9mvx\" (UID: \"3616a4cb-ef1d-4125-b880-5b1486eb1d55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9mvx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.426243 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-nqcjf\" (UniqueName: \"kubernetes.io/projected/a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38-kube-api-access-nqcjf\") pod \"console-operator-58897d9998-xfg24\" (UID: \"a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38\") " pod="openshift-console-operator/console-operator-58897d9998-xfg24" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.426400 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.426432 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef9bd599-747b-470d-941b-fe7d6ee15be1-client-ca\") pod \"route-controller-manager-6576b87f9c-z4rcp\" (UID: \"ef9bd599-747b-470d-941b-fe7d6ee15be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.426465 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.426646 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-audit-policies\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: 
\"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.426678 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3616a4cb-ef1d-4125-b880-5b1486eb1d55-machine-approver-tls\") pod \"machine-approver-56656f9798-b9mvx\" (UID: \"3616a4cb-ef1d-4125-b880-5b1486eb1d55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9mvx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.426701 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/780e312c-4f87-40d8-b146-0bcefe9c9c89-images\") pod \"machine-api-operator-5694c8668f-5g46q\" (UID: \"780e312c-4f87-40d8-b146-0bcefe9c9c89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5g46q" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.426912 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3616a4cb-ef1d-4125-b880-5b1486eb1d55-auth-proxy-config\") pod \"machine-approver-56656f9798-b9mvx\" (UID: \"3616a4cb-ef1d-4125-b880-5b1486eb1d55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9mvx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.426955 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-etcd-client\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.426988 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zl4qb\" (UniqueName: \"kubernetes.io/projected/0e48a27d-76e1-45f3-87af-c9b306291d25-kube-api-access-zl4qb\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.427188 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9zr7\" (UniqueName: \"kubernetes.io/projected/1276d8a9-5af1-4a3f-a61c-255ed424ee88-kube-api-access-f9zr7\") pod \"controller-manager-879f6c89f-86f5t\" (UID: \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.427234 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e48a27d-76e1-45f3-87af-c9b306291d25-audit-dir\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.427267 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d80173e6-eb6e-4671-b61c-f223b0f3dc24-audit-dir\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.427298 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d80173e6-eb6e-4671-b61c-f223b0f3dc24-encryption-config\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc 
kubenswrapper[4832]: I0312 14:50:56.427561 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-encryption-config\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.427635 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d80173e6-eb6e-4671-b61c-f223b0f3dc24-serving-cert\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.427664 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef9bd599-747b-470d-941b-fe7d6ee15be1-serving-cert\") pod \"route-controller-manager-6576b87f9c-z4rcp\" (UID: \"ef9bd599-747b-470d-941b-fe7d6ee15be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.427895 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d80173e6-eb6e-4671-b61c-f223b0f3dc24-etcd-serving-ca\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.427927 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d80173e6-eb6e-4671-b61c-f223b0f3dc24-image-import-ca\") pod \"apiserver-76f77b778f-8x4hd\" (UID: 
\"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.427958 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.428151 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86719732-2809-4511-8e2c-9fb82df5c4bc-config\") pod \"openshift-apiserver-operator-796bbdcf4f-t2c67\" (UID: \"86719732-2809-4511-8e2c-9fb82df5c4bc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t2c67" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.428185 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r24bk\" (UniqueName: \"kubernetes.io/projected/5d4068e0-53ed-433d-9657-ff75730d43a6-kube-api-access-r24bk\") pod \"downloads-7954f5f757-hz5vn\" (UID: \"5d4068e0-53ed-433d-9657-ff75730d43a6\") " pod="openshift-console/downloads-7954f5f757-hz5vn" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.428383 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-serving-cert\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.428420 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38-serving-cert\") pod \"console-operator-58897d9998-xfg24\" (UID: \"a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38\") " pod="openshift-console-operator/console-operator-58897d9998-xfg24" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.428454 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/fc57df00-709c-4cee-9d19-a00dca7d58da-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8wfwt\" (UID: \"fc57df00-709c-4cee-9d19-a00dca7d58da\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.428668 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqdxz\" (UniqueName: \"kubernetes.io/projected/86719732-2809-4511-8e2c-9fb82df5c4bc-kube-api-access-dqdxz\") pod \"openshift-apiserver-operator-796bbdcf4f-t2c67\" (UID: \"86719732-2809-4511-8e2c-9fb82df5c4bc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t2c67" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.428699 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.428733 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-audit-dir\") pod \"apiserver-7bbb656c7d-spzsp\" 
(UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.428877 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1276d8a9-5af1-4a3f-a61c-255ed424ee88-client-ca\") pod \"controller-manager-879f6c89f-86f5t\" (UID: \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.428911 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mg44\" (UniqueName: \"kubernetes.io/projected/ef9bd599-747b-470d-941b-fe7d6ee15be1-kube-api-access-8mg44\") pod \"route-controller-manager-6576b87f9c-z4rcp\" (UID: \"ef9bd599-747b-470d-941b-fe7d6ee15be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.428940 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.428948 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2mb5"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.429061 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-577sv\" (UniqueName: \"kubernetes.io/projected/780e312c-4f87-40d8-b146-0bcefe9c9c89-kube-api-access-577sv\") pod 
\"machine-api-operator-5694c8668f-5g46q\" (UID: \"780e312c-4f87-40d8-b146-0bcefe9c9c89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5g46q" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.429095 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc57df00-709c-4cee-9d19-a00dca7d58da-serving-cert\") pod \"openshift-config-operator-7777fb866f-8wfwt\" (UID: \"fc57df00-709c-4cee-9d19-a00dca7d58da\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.429124 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d80173e6-eb6e-4671-b61c-f223b0f3dc24-node-pullsecrets\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.429232 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.432941 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.435476 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j2cpq"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.448049 4832 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.448372 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.449929 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2mb5" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.450293 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.451041 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j2cpq" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.451454 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.452326 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.452803 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8xd72"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.453033 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.453120 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8xd72" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.453623 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dc9cx"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.454282 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dc9cx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.454328 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.454863 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.454983 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.455081 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-42j9g"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.455688 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-42j9g" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.456050 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.458131 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.459019 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.459425 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.459536 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.459710 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.460074 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.460216 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.460339 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.460430 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.461398 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.461594 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.462499 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.462764 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.463000 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.463230 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 12 14:50:56 crc kubenswrapper[4832]: 
I0312 14:50:56.464109 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.464488 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.469400 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555450-w9jtm"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.471464 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555450-w9jtm" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.473272 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.473890 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.474588 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.475123 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.476316 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.478585 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ph4hc"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.479204 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-987hp"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.479350 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.479463 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.482330 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.482786 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.484346 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n84cj"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.485388 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-n84cj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.486835 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdjkj"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.487841 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdjkj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.490070 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5c222"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.490450 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5c222" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.491123 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.491719 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.494618 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.498948 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jm6kd"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.499896 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jm6kd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.501218 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp6fb"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.502129 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp6fb" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.502688 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gjmmz"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.503716 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-gjmmz" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.505228 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5h6nt"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.506056 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5h6nt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.510027 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bb9w7"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.510726 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xj2sf"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.510853 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bb9w7" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.511289 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xj2sf" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.511319 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.512060 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.513532 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-jthtj"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.513951 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.514860 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.515012 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hz5vn"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.517522 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8x4hd"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.518989 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xfg24"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.520438 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-svx7c"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.521190 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-svx7c" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.522666 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdjkj"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.526125 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5g46q"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.527222 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.528587 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2jc2t"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.529541 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.529834 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1276d8a9-5af1-4a3f-a61c-255ed424ee88-client-ca\") pod \"controller-manager-879f6c89f-86f5t\" (UID: \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.529887 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.529929 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-8mg44\" (UniqueName: \"kubernetes.io/projected/ef9bd599-747b-470d-941b-fe7d6ee15be1-kube-api-access-8mg44\") pod \"route-controller-manager-6576b87f9c-z4rcp\" (UID: \"ef9bd599-747b-470d-941b-fe7d6ee15be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.529952 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b2442d67-5fd4-4cde-bf16-afc8b174b487-etcd-client\") pod \"etcd-operator-b45778765-2jc2t\" (UID: \"b2442d67-5fd4-4cde-bf16-afc8b174b487\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.529976 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d80173e6-eb6e-4671-b61c-f223b0f3dc24-node-pullsecrets\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530001 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530029 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-577sv\" (UniqueName: \"kubernetes.io/projected/780e312c-4f87-40d8-b146-0bcefe9c9c89-kube-api-access-577sv\") pod \"machine-api-operator-5694c8668f-5g46q\" (UID: \"780e312c-4f87-40d8-b146-0bcefe9c9c89\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-5g46q" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530053 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc57df00-709c-4cee-9d19-a00dca7d58da-serving-cert\") pod \"openshift-config-operator-7777fb866f-8wfwt\" (UID: \"fc57df00-709c-4cee-9d19-a00dca7d58da\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530074 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2442d67-5fd4-4cde-bf16-afc8b174b487-config\") pod \"etcd-operator-b45778765-2jc2t\" (UID: \"b2442d67-5fd4-4cde-bf16-afc8b174b487\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530110 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80173e6-eb6e-4671-b61c-f223b0f3dc24-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530152 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530195 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530215 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/780e312c-4f87-40d8-b146-0bcefe9c9c89-config\") pod \"machine-api-operator-5694c8668f-5g46q\" (UID: \"780e312c-4f87-40d8-b146-0bcefe9c9c89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5g46q" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530236 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d80173e6-eb6e-4671-b61c-f223b0f3dc24-etcd-client\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530256 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38-trusted-ca\") pod \"console-operator-58897d9998-xfg24\" (UID: \"a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38\") " pod="openshift-console-operator/console-operator-58897d9998-xfg24" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530277 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1276d8a9-5af1-4a3f-a61c-255ed424ee88-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-86f5t\" (UID: \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530294 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-4t6v8\" (UniqueName: \"kubernetes.io/projected/fc57df00-709c-4cee-9d19-a00dca7d58da-kube-api-access-4t6v8\") pod \"openshift-config-operator-7777fb866f-8wfwt\" (UID: \"fc57df00-709c-4cee-9d19-a00dca7d58da\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530317 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1276d8a9-5af1-4a3f-a61c-255ed424ee88-config\") pod \"controller-manager-879f6c89f-86f5t\" (UID: \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530336 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530358 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntms8\" (UniqueName: \"kubernetes.io/projected/3616a4cb-ef1d-4125-b880-5b1486eb1d55-kube-api-access-ntms8\") pod \"machine-approver-56656f9798-b9mvx\" (UID: \"3616a4cb-ef1d-4125-b880-5b1486eb1d55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9mvx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530377 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530397 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b2442d67-5fd4-4cde-bf16-afc8b174b487-etcd-ca\") pod \"etcd-operator-b45778765-2jc2t\" (UID: \"b2442d67-5fd4-4cde-bf16-afc8b174b487\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530420 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z846d\" (UniqueName: \"kubernetes.io/projected/d80173e6-eb6e-4671-b61c-f223b0f3dc24-kube-api-access-z846d\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530444 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d80173e6-eb6e-4671-b61c-f223b0f3dc24-node-pullsecrets\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530469 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1276d8a9-5af1-4a3f-a61c-255ed424ee88-serving-cert\") pod \"controller-manager-879f6c89f-86f5t\" (UID: \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530489 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-audit-policies\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: 
\"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530525 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530545 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530570 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wgbn\" (UniqueName: \"kubernetes.io/projected/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-kube-api-access-6wgbn\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530612 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9bd599-747b-470d-941b-fe7d6ee15be1-config\") pod \"route-controller-manager-6576b87f9c-z4rcp\" (UID: \"ef9bd599-747b-470d-941b-fe7d6ee15be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530648 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" 
(UniqueName: \"kubernetes.io/configmap/d80173e6-eb6e-4671-b61c-f223b0f3dc24-audit\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530670 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38-config\") pod \"console-operator-58897d9998-xfg24\" (UID: \"a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38\") " pod="openshift-console-operator/console-operator-58897d9998-xfg24" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530689 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d80173e6-eb6e-4671-b61c-f223b0f3dc24-config\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530707 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/780e312c-4f87-40d8-b146-0bcefe9c9c89-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5g46q\" (UID: \"780e312c-4f87-40d8-b146-0bcefe9c9c89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5g46q" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530726 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530745 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3616a4cb-ef1d-4125-b880-5b1486eb1d55-config\") pod \"machine-approver-56656f9798-b9mvx\" (UID: \"3616a4cb-ef1d-4125-b880-5b1486eb1d55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9mvx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530768 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86719732-2809-4511-8e2c-9fb82df5c4bc-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-t2c67\" (UID: \"86719732-2809-4511-8e2c-9fb82df5c4bc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t2c67" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530788 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2442d67-5fd4-4cde-bf16-afc8b174b487-serving-cert\") pod \"etcd-operator-b45778765-2jc2t\" (UID: \"b2442d67-5fd4-4cde-bf16-afc8b174b487\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530809 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqcjf\" (UniqueName: \"kubernetes.io/projected/a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38-kube-api-access-nqcjf\") pod \"console-operator-58897d9998-xfg24\" (UID: \"a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38\") " pod="openshift-console-operator/console-operator-58897d9998-xfg24" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530827 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.530852 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1276d8a9-5af1-4a3f-a61c-255ed424ee88-client-ca\") pod \"controller-manager-879f6c89f-86f5t\" (UID: \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.531311 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef9bd599-747b-470d-941b-fe7d6ee15be1-client-ca\") pod \"route-controller-manager-6576b87f9c-z4rcp\" (UID: \"ef9bd599-747b-470d-941b-fe7d6ee15be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.531341 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.531386 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-audit-policies\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.531402 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-trusted-ca-bundle\") pod 
\"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.532047 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9bd599-747b-470d-941b-fe7d6ee15be1-config\") pod \"route-controller-manager-6576b87f9c-z4rcp\" (UID: \"ef9bd599-747b-470d-941b-fe7d6ee15be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.532059 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/780e312c-4f87-40d8-b146-0bcefe9c9c89-config\") pod \"machine-api-operator-5694c8668f-5g46q\" (UID: \"780e312c-4f87-40d8-b146-0bcefe9c9c89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5g46q" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.532067 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d80173e6-eb6e-4671-b61c-f223b0f3dc24-audit\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.531408 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3616a4cb-ef1d-4125-b880-5b1486eb1d55-machine-approver-tls\") pod \"machine-approver-56656f9798-b9mvx\" (UID: \"3616a4cb-ef1d-4125-b880-5b1486eb1d55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9mvx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.532610 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/780e312c-4f87-40d8-b146-0bcefe9c9c89-images\") pod 
\"machine-api-operator-5694c8668f-5g46q\" (UID: \"780e312c-4f87-40d8-b146-0bcefe9c9c89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5g46q" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.532647 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3616a4cb-ef1d-4125-b880-5b1486eb1d55-auth-proxy-config\") pod \"machine-approver-56656f9798-b9mvx\" (UID: \"3616a4cb-ef1d-4125-b880-5b1486eb1d55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9mvx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.532680 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl4qb\" (UniqueName: \"kubernetes.io/projected/0e48a27d-76e1-45f3-87af-c9b306291d25-kube-api-access-zl4qb\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.532708 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-etcd-client\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.532749 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-audit-policies\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.532762 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38-config\") pod \"console-operator-58897d9998-xfg24\" (UID: \"a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38\") " pod="openshift-console-operator/console-operator-58897d9998-xfg24" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.532757 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9zr7\" (UniqueName: \"kubernetes.io/projected/1276d8a9-5af1-4a3f-a61c-255ed424ee88-kube-api-access-f9zr7\") pod \"controller-manager-879f6c89f-86f5t\" (UID: \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.532802 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9br9b\" (UniqueName: \"kubernetes.io/projected/4cf9358d-29c3-4296-9ed7-740163adbcb8-kube-api-access-9br9b\") pod \"migrator-59844c95c7-wj5jt\" (UID: \"4cf9358d-29c3-4296-9ed7-740163adbcb8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wj5jt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.532835 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e48a27d-76e1-45f3-87af-c9b306291d25-audit-dir\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.532877 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d80173e6-eb6e-4671-b61c-f223b0f3dc24-audit-dir\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.532895 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrwcc\" (UniqueName: \"kubernetes.io/projected/b2442d67-5fd4-4cde-bf16-afc8b174b487-kube-api-access-lrwcc\") pod \"etcd-operator-b45778765-2jc2t\" (UID: \"b2442d67-5fd4-4cde-bf16-afc8b174b487\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.532915 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d80173e6-eb6e-4671-b61c-f223b0f3dc24-encryption-config\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.532932 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-encryption-config\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.532961 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d80173e6-eb6e-4671-b61c-f223b0f3dc24-serving-cert\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.532993 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef9bd599-747b-470d-941b-fe7d6ee15be1-serving-cert\") pod \"route-controller-manager-6576b87f9c-z4rcp\" (UID: \"ef9bd599-747b-470d-941b-fe7d6ee15be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" Mar 12 14:50:56 crc 
kubenswrapper[4832]: I0312 14:50:56.533035 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d80173e6-eb6e-4671-b61c-f223b0f3dc24-etcd-serving-ca\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.533058 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d80173e6-eb6e-4671-b61c-f223b0f3dc24-image-import-ca\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.533085 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.533110 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86719732-2809-4511-8e2c-9fb82df5c4bc-config\") pod \"openshift-apiserver-operator-796bbdcf4f-t2c67\" (UID: \"86719732-2809-4511-8e2c-9fb82df5c4bc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t2c67" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.533130 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r24bk\" (UniqueName: \"kubernetes.io/projected/5d4068e0-53ed-433d-9657-ff75730d43a6-kube-api-access-r24bk\") pod \"downloads-7954f5f757-hz5vn\" (UID: 
\"5d4068e0-53ed-433d-9657-ff75730d43a6\") " pod="openshift-console/downloads-7954f5f757-hz5vn" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.533154 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-serving-cert\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.533171 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38-serving-cert\") pod \"console-operator-58897d9998-xfg24\" (UID: \"a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38\") " pod="openshift-console-operator/console-operator-58897d9998-xfg24" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.533207 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/fc57df00-709c-4cee-9d19-a00dca7d58da-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8wfwt\" (UID: \"fc57df00-709c-4cee-9d19-a00dca7d58da\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.533224 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b2442d67-5fd4-4cde-bf16-afc8b174b487-etcd-service-ca\") pod \"etcd-operator-b45778765-2jc2t\" (UID: \"b2442d67-5fd4-4cde-bf16-afc8b174b487\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.533244 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqdxz\" (UniqueName: 
\"kubernetes.io/projected/86719732-2809-4511-8e2c-9fb82df5c4bc-kube-api-access-dqdxz\") pod \"openshift-apiserver-operator-796bbdcf4f-t2c67\" (UID: \"86719732-2809-4511-8e2c-9fb82df5c4bc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t2c67" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.533266 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.533283 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-audit-dir\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.533345 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-audit-dir\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.533418 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d80173e6-eb6e-4671-b61c-f223b0f3dc24-config\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.533809 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/780e312c-4f87-40d8-b146-0bcefe9c9c89-images\") pod \"machine-api-operator-5694c8668f-5g46q\" (UID: \"780e312c-4f87-40d8-b146-0bcefe9c9c89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5g46q" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.534111 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38-trusted-ca\") pod \"console-operator-58897d9998-xfg24\" (UID: \"a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38\") " pod="openshift-console-operator/console-operator-58897d9998-xfg24" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.534423 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1276d8a9-5af1-4a3f-a61c-255ed424ee88-config\") pod \"controller-manager-879f6c89f-86f5t\" (UID: \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.534579 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3616a4cb-ef1d-4125-b880-5b1486eb1d55-auth-proxy-config\") pod \"machine-approver-56656f9798-b9mvx\" (UID: \"3616a4cb-ef1d-4125-b880-5b1486eb1d55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9mvx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.535766 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.535912 4832 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wj5jt"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.535950 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-86f5t"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.535962 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.536323 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.536664 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e48a27d-76e1-45f3-87af-c9b306291d25-audit-dir\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.537075 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d80173e6-eb6e-4671-b61c-f223b0f3dc24-audit-dir\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.537540 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3616a4cb-ef1d-4125-b880-5b1486eb1d55-machine-approver-tls\") pod \"machine-approver-56656f9798-b9mvx\" (UID: 
\"3616a4cb-ef1d-4125-b880-5b1486eb1d55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9mvx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.538017 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.538290 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86719732-2809-4511-8e2c-9fb82df5c4bc-config\") pod \"openshift-apiserver-operator-796bbdcf4f-t2c67\" (UID: \"86719732-2809-4511-8e2c-9fb82df5c4bc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t2c67" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.538370 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80173e6-eb6e-4671-b61c-f223b0f3dc24-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.538441 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-etcd-client\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.538693 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/fc57df00-709c-4cee-9d19-a00dca7d58da-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8wfwt\" (UID: \"fc57df00-709c-4cee-9d19-a00dca7d58da\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.539141 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.539342 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.539979 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.539990 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3616a4cb-ef1d-4125-b880-5b1486eb1d55-config\") pod \"machine-approver-56656f9798-b9mvx\" (UID: \"3616a4cb-ef1d-4125-b880-5b1486eb1d55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9mvx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.540084 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d80173e6-eb6e-4671-b61c-f223b0f3dc24-etcd-serving-ca\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") 
" pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.540594 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d80173e6-eb6e-4671-b61c-f223b0f3dc24-image-import-ca\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.540660 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ph4hc"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.540672 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef9bd599-747b-470d-941b-fe7d6ee15be1-client-ca\") pod \"route-controller-manager-6576b87f9c-z4rcp\" (UID: \"ef9bd599-747b-470d-941b-fe7d6ee15be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.541283 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-audit-policies\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.541535 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-serving-cert\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.541623 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-encryption-config\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.541662 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.541625 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1276d8a9-5af1-4a3f-a61c-255ed424ee88-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-86f5t\" (UID: \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.541918 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.542211 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc57df00-709c-4cee-9d19-a00dca7d58da-serving-cert\") pod \"openshift-config-operator-7777fb866f-8wfwt\" (UID: \"fc57df00-709c-4cee-9d19-a00dca7d58da\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.542359 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1276d8a9-5af1-4a3f-a61c-255ed424ee88-serving-cert\") pod \"controller-manager-879f6c89f-86f5t\" (UID: 
\"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.542388 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.542466 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.542953 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38-serving-cert\") pod \"console-operator-58897d9998-xfg24\" (UID: \"a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38\") " pod="openshift-console-operator/console-operator-58897d9998-xfg24" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.543242 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.543869 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.543916 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t2c67"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.544229 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86719732-2809-4511-8e2c-9fb82df5c4bc-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-t2c67\" (UID: \"86719732-2809-4511-8e2c-9fb82df5c4bc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t2c67" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.544456 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.544904 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/780e312c-4f87-40d8-b146-0bcefe9c9c89-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5g46q\" (UID: \"780e312c-4f87-40d8-b146-0bcefe9c9c89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5g46q" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.545007 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6"] Mar 
12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.545242 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.546314 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-42j9g"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.546465 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d80173e6-eb6e-4671-b61c-f223b0f3dc24-etcd-client\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.546586 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d80173e6-eb6e-4671-b61c-f223b0f3dc24-encryption-config\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.546839 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef9bd599-747b-470d-941b-fe7d6ee15be1-serving-cert\") pod \"route-controller-manager-6576b87f9c-z4rcp\" (UID: \"ef9bd599-747b-470d-941b-fe7d6ee15be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.547358 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d80173e6-eb6e-4671-b61c-f223b0f3dc24-serving-cert\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.547579 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lgx9r"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.548751 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-987hp"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.549674 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8xd72"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.550619 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n84cj"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.551545 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dc9cx"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.552526 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.553587 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5c222"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.556156 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-62nqf"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.559486 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 
14:50:56.561326 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.561363 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555450-w9jtm"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.561576 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-62nqf" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.562428 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.570494 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j2cpq"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.575209 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.575358 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gjmmz"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.576664 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4tg76"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.577932 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2mb5"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.579588 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5h6nt"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.580987 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.582173 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xj2sf"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.583222 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp6fb"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.584386 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-svx7c"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.585828 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bb9w7"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.587082 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-62nqf"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.588552 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.589868 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-k6mdb"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.590672 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-k6mdb" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.591357 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jm6kd"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.592551 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gx8zc"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.593414 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.594046 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gx8zc"] Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.599968 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.615404 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.634094 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b2442d67-5fd4-4cde-bf16-afc8b174b487-etcd-service-ca\") pod \"etcd-operator-b45778765-2jc2t\" (UID: \"b2442d67-5fd4-4cde-bf16-afc8b174b487\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.634152 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b2442d67-5fd4-4cde-bf16-afc8b174b487-etcd-client\") pod \"etcd-operator-b45778765-2jc2t\" (UID: \"b2442d67-5fd4-4cde-bf16-afc8b174b487\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.634181 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2442d67-5fd4-4cde-bf16-afc8b174b487-config\") pod \"etcd-operator-b45778765-2jc2t\" (UID: \"b2442d67-5fd4-4cde-bf16-afc8b174b487\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.634234 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b2442d67-5fd4-4cde-bf16-afc8b174b487-etcd-ca\") pod \"etcd-operator-b45778765-2jc2t\" (UID: \"b2442d67-5fd4-4cde-bf16-afc8b174b487\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.634284 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2442d67-5fd4-4cde-bf16-afc8b174b487-serving-cert\") pod \"etcd-operator-b45778765-2jc2t\" (UID: \"b2442d67-5fd4-4cde-bf16-afc8b174b487\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.634331 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9br9b\" (UniqueName: \"kubernetes.io/projected/4cf9358d-29c3-4296-9ed7-740163adbcb8-kube-api-access-9br9b\") pod \"migrator-59844c95c7-wj5jt\" (UID: \"4cf9358d-29c3-4296-9ed7-740163adbcb8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wj5jt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.634354 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrwcc\" (UniqueName: \"kubernetes.io/projected/b2442d67-5fd4-4cde-bf16-afc8b174b487-kube-api-access-lrwcc\") pod \"etcd-operator-b45778765-2jc2t\" (UID: 
\"b2442d67-5fd4-4cde-bf16-afc8b174b487\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.636323 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b2442d67-5fd4-4cde-bf16-afc8b174b487-etcd-service-ca\") pod \"etcd-operator-b45778765-2jc2t\" (UID: \"b2442d67-5fd4-4cde-bf16-afc8b174b487\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.637296 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b2442d67-5fd4-4cde-bf16-afc8b174b487-etcd-ca\") pod \"etcd-operator-b45778765-2jc2t\" (UID: \"b2442d67-5fd4-4cde-bf16-afc8b174b487\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.637864 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2442d67-5fd4-4cde-bf16-afc8b174b487-config\") pod \"etcd-operator-b45778765-2jc2t\" (UID: \"b2442d67-5fd4-4cde-bf16-afc8b174b487\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.639860 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2442d67-5fd4-4cde-bf16-afc8b174b487-serving-cert\") pod \"etcd-operator-b45778765-2jc2t\" (UID: \"b2442d67-5fd4-4cde-bf16-afc8b174b487\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.641934 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b2442d67-5fd4-4cde-bf16-afc8b174b487-etcd-client\") pod \"etcd-operator-b45778765-2jc2t\" (UID: \"b2442d67-5fd4-4cde-bf16-afc8b174b487\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.655664 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.674525 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.695328 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.714624 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.735339 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-bound-sa-token\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.735409 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.735449 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-trusted-ca\") 
pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.735540 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-registry-certificates\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.735581 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skhsj\" (UniqueName: \"kubernetes.io/projected/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-kube-api-access-skhsj\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.735609 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.735661 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-registry-tls\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.735730 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: E0312 14:50:56.736099 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:50:57.236073791 +0000 UTC m=+215.880088117 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.739722 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.755146 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.774875 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.795287 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 12 14:50:56 crc 
kubenswrapper[4832]: I0312 14:50:56.839866 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840062 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4519b556-4bf4-4c0a-a3a7-d7441728444d-serving-cert\") pod \"authentication-operator-69f744f599-987hp\" (UID: \"4519b556-4bf4-4c0a-a3a7-d7441728444d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840086 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghn5p\" (UniqueName: \"kubernetes.io/projected/d1ff5656-430d-4071-9a26-ce6bf8ec844b-kube-api-access-ghn5p\") pod \"ingress-operator-5b745b69d9-dvb29\" (UID: \"d1ff5656-430d-4071-9a26-ce6bf8ec844b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840104 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f48kh\" (UniqueName: \"kubernetes.io/projected/0bf32718-d22d-4e55-b158-43a02ef6a67f-kube-api-access-f48kh\") pod \"collect-profiles-29555445-8xpmk\" (UID: \"0bf32718-d22d-4e55-b158-43a02ef6a67f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840119 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/129d8f6c-5fb6-48ca-b269-c8c17a3a3efe-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5c222\" (UID: \"129d8f6c-5fb6-48ca-b269-c8c17a3a3efe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5c222" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840142 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sshvk\" (UniqueName: \"kubernetes.io/projected/24defe84-7690-4b69-b9db-5ee531d7f725-kube-api-access-sshvk\") pod \"multus-admission-controller-857f4d67dd-gjmmz\" (UID: \"24defe84-7690-4b69-b9db-5ee531d7f725\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gjmmz" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840160 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8fd26103-6087-4ded-9197-cd19279c4413-tmpfs\") pod \"packageserver-d55dfcdfc-grxzk\" (UID: \"8fd26103-6087-4ded-9197-cd19279c4413\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840175 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24defe84-7690-4b69-b9db-5ee531d7f725-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gjmmz\" (UID: \"24defe84-7690-4b69-b9db-5ee531d7f725\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gjmmz" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840188 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8fd26103-6087-4ded-9197-cd19279c4413-webhook-cert\") pod \"packageserver-d55dfcdfc-grxzk\" (UID: \"8fd26103-6087-4ded-9197-cd19279c4413\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840203 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hq87\" (UniqueName: \"kubernetes.io/projected/7e3e2355-1e51-4248-8dba-a8f3c45657f9-kube-api-access-6hq87\") pod \"dns-default-svx7c\" (UID: \"7e3e2355-1e51-4248-8dba-a8f3c45657f9\") " pod="openshift-dns/dns-default-svx7c" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840219 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4519b556-4bf4-4c0a-a3a7-d7441728444d-service-ca-bundle\") pod \"authentication-operator-69f744f599-987hp\" (UID: \"4519b556-4bf4-4c0a-a3a7-d7441728444d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840245 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-registry-certificates\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840262 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5b8b3d80-e845-47b0-928e-a3faff312e25-signing-cabundle\") pod \"service-ca-9c57cc56f-5h6nt\" (UID: \"5b8b3d80-e845-47b0-928e-a3faff312e25\") " pod="openshift-service-ca/service-ca-9c57cc56f-5h6nt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840275 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/3e627486-0771-4742-90d4-a9166283471f-metrics-tls\") pod \"dns-operator-744455d44c-n84cj\" (UID: \"3e627486-0771-4742-90d4-a9166283471f\") " pod="openshift-dns-operator/dns-operator-744455d44c-n84cj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840291 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0479cc4-afec-46e5-9472-e82716b4e9b6-config\") pod \"kube-controller-manager-operator-78b949d7b-8xd72\" (UID: \"d0479cc4-afec-46e5-9472-e82716b4e9b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8xd72" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840304 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/244172fe-e44e-4f8f-86d5-69f70a7c5dd0-proxy-tls\") pod \"machine-config-controller-84d6567774-dc9cx\" (UID: \"244172fe-e44e-4f8f-86d5-69f70a7c5dd0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dc9cx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840318 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrq4q\" (UniqueName: \"kubernetes.io/projected/244172fe-e44e-4f8f-86d5-69f70a7c5dd0-kube-api-access-vrq4q\") pod \"machine-config-controller-84d6567774-dc9cx\" (UID: \"244172fe-e44e-4f8f-86d5-69f70a7c5dd0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dc9cx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840341 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a9d5c80a-fef6-4eae-a1e9-951f2d72647b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wp6fb\" (UID: 
\"a9d5c80a-fef6-4eae-a1e9-951f2d72647b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp6fb" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840357 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b968d323-a039-4d05-9e1f-1d9d3b0ab1a1-serving-cert\") pod \"service-ca-operator-777779d784-xj2sf\" (UID: \"b968d323-a039-4d05-9e1f-1d9d3b0ab1a1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xj2sf" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840372 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndg4x\" (UniqueName: \"kubernetes.io/projected/44a36bdc-062d-4e68-abb8-4ec20ba3e41b-kube-api-access-ndg4x\") pod \"catalog-operator-68c6474976-d8ntm\" (UID: \"44a36bdc-062d-4e68-abb8-4ec20ba3e41b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840387 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/258f384d-e8e6-410b-acb9-50d871e0d0d6-proxy-tls\") pod \"machine-config-operator-74547568cd-kmvjl\" (UID: \"258f384d-e8e6-410b-acb9-50d871e0d0d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840415 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b968d323-a039-4d05-9e1f-1d9d3b0ab1a1-config\") pod \"service-ca-operator-777779d784-xj2sf\" (UID: \"b968d323-a039-4d05-9e1f-1d9d3b0ab1a1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xj2sf" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840434 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/244172fe-e44e-4f8f-86d5-69f70a7c5dd0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dc9cx\" (UID: \"244172fe-e44e-4f8f-86d5-69f70a7c5dd0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dc9cx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840449 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxdx7\" (UniqueName: \"kubernetes.io/projected/31435028-adc4-4b77-85d3-5d7659cd80f0-kube-api-access-zxdx7\") pod \"marketplace-operator-79b997595-ph4hc\" (UID: \"31435028-adc4-4b77-85d3-5d7659cd80f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840464 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrhzd\" (UniqueName: \"kubernetes.io/projected/b968d323-a039-4d05-9e1f-1d9d3b0ab1a1-kube-api-access-xrhzd\") pod \"service-ca-operator-777779d784-xj2sf\" (UID: \"b968d323-a039-4d05-9e1f-1d9d3b0ab1a1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xj2sf" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840490 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxbxr\" (UniqueName: \"kubernetes.io/projected/2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad-kube-api-access-xxbxr\") pod \"olm-operator-6b444d44fb-txst6\" (UID: \"2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840531 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwxgg\" (UniqueName: 
\"kubernetes.io/projected/8ecc652e-061a-4e8a-8757-d6eea707acf1-kube-api-access-mwxgg\") pod \"cluster-image-registry-operator-dc59b4c8b-256sp\" (UID: \"8ecc652e-061a-4e8a-8757-d6eea707acf1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840546 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51605fc6-0da6-4a38-b44a-d8d47080ff6a-metrics-certs\") pod \"router-default-5444994796-jthtj\" (UID: \"51605fc6-0da6-4a38-b44a-d8d47080ff6a\") " pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840577 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d1ff5656-430d-4071-9a26-ce6bf8ec844b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dvb29\" (UID: \"d1ff5656-430d-4071-9a26-ce6bf8ec844b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840592 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ecc652e-061a-4e8a-8757-d6eea707acf1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-256sp\" (UID: \"8ecc652e-061a-4e8a-8757-d6eea707acf1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840607 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg7l2\" (UniqueName: \"kubernetes.io/projected/a9d5c80a-fef6-4eae-a1e9-951f2d72647b-kube-api-access-mg7l2\") pod \"control-plane-machine-set-operator-78cbb6b69f-wp6fb\" (UID: \"a9d5c80a-fef6-4eae-a1e9-951f2d72647b\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp6fb" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840624 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840640 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/129d8f6c-5fb6-48ca-b269-c8c17a3a3efe-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5c222\" (UID: \"129d8f6c-5fb6-48ca-b269-c8c17a3a3efe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5c222" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840663 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1e8afb95-8d48-45d4-88f7-900c0dc949f3-cert\") pod \"ingress-canary-62nqf\" (UID: \"1e8afb95-8d48-45d4-88f7-900c0dc949f3\") " pod="openshift-ingress-canary/ingress-canary-62nqf" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840677 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0479cc4-afec-46e5-9472-e82716b4e9b6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8xd72\" (UID: \"d0479cc4-afec-46e5-9472-e82716b4e9b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8xd72" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840692 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8fd26103-6087-4ded-9197-cd19279c4413-apiservice-cert\") pod \"packageserver-d55dfcdfc-grxzk\" (UID: \"8fd26103-6087-4ded-9197-cd19279c4413\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840708 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad-profile-collector-cert\") pod \"olm-operator-6b444d44fb-txst6\" (UID: \"2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840725 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8pkh\" (UniqueName: \"kubernetes.io/projected/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-kube-api-access-x8pkh\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840744 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0479cc4-afec-46e5-9472-e82716b4e9b6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8xd72\" (UID: \"d0479cc4-afec-46e5-9472-e82716b4e9b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8xd72" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840759 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fc63079e-bbae-4de6-b756-e23a6df3f250-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j2cpq\" (UID: \"fc63079e-bbae-4de6-b756-e23a6df3f250\") 
" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j2cpq" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840776 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb097132-cc58-4d48-82c5-1e9f0fc0d967-config\") pod \"kube-apiserver-operator-766d6c64bb-n2mb5\" (UID: \"bb097132-cc58-4d48-82c5-1e9f0fc0d967\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2mb5" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840797 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e84ddfa-88f9-4e3b-9708-65796373121b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jm6kd\" (UID: \"6e84ddfa-88f9-4e3b-9708-65796373121b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jm6kd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840817 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/579d6f1b-e8f5-4d51-9527-41988322b007-plugins-dir\") pod \"csi-hostpathplugin-gx8zc\" (UID: \"579d6f1b-e8f5-4d51-9527-41988322b007\") " pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840834 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ecc652e-061a-4e8a-8757-d6eea707acf1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-256sp\" (UID: \"8ecc652e-061a-4e8a-8757-d6eea707acf1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840851 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ecc652e-061a-4e8a-8757-d6eea707acf1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-256sp\" (UID: \"8ecc652e-061a-4e8a-8757-d6eea707acf1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840866 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-service-ca\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.840814 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841448 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e3e2355-1e51-4248-8dba-a8f3c45657f9-metrics-tls\") pod \"dns-default-svx7c\" (UID: \"7e3e2355-1e51-4248-8dba-a8f3c45657f9\") " pod="openshift-dns/dns-default-svx7c" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841472 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1ff5656-430d-4071-9a26-ce6bf8ec844b-trusted-ca\") pod \"ingress-operator-5b745b69d9-dvb29\" (UID: \"d1ff5656-430d-4071-9a26-ce6bf8ec844b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841522 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad-srv-cert\") pod \"olm-operator-6b444d44fb-txst6\" (UID: \"2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841553 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/579d6f1b-e8f5-4d51-9527-41988322b007-mountpoint-dir\") pod \"csi-hostpathplugin-gx8zc\" (UID: \"579d6f1b-e8f5-4d51-9527-41988322b007\") " pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841570 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5mtl\" (UniqueName: \"kubernetes.io/projected/1e8afb95-8d48-45d4-88f7-900c0dc949f3-kube-api-access-c5mtl\") pod \"ingress-canary-62nqf\" (UID: \"1e8afb95-8d48-45d4-88f7-900c0dc949f3\") " pod="openshift-ingress-canary/ingress-canary-62nqf" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841610 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb097132-cc58-4d48-82c5-1e9f0fc0d967-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n2mb5\" (UID: \"bb097132-cc58-4d48-82c5-1e9f0fc0d967\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2mb5" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841637 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvlg5\" (UniqueName: \"kubernetes.io/projected/51605fc6-0da6-4a38-b44a-d8d47080ff6a-kube-api-access-wvlg5\") pod \"router-default-5444994796-jthtj\" (UID: \"51605fc6-0da6-4a38-b44a-d8d47080ff6a\") " pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 
14:50:56.841688 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bcdaf53-8f1a-4748-96bc-721dc6b821fc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cdjkj\" (UID: \"9bcdaf53-8f1a-4748-96bc-721dc6b821fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdjkj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841709 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmjmm\" (UniqueName: \"kubernetes.io/projected/9bcdaf53-8f1a-4748-96bc-721dc6b821fc-kube-api-access-xmjmm\") pod \"openshift-controller-manager-operator-756b6f6bc6-cdjkj\" (UID: \"9bcdaf53-8f1a-4748-96bc-721dc6b821fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdjkj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841738 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/258f384d-e8e6-410b-acb9-50d871e0d0d6-images\") pod \"machine-config-operator-74547568cd-kmvjl\" (UID: \"258f384d-e8e6-410b-acb9-50d871e0d0d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841755 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skhsj\" (UniqueName: \"kubernetes.io/projected/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-kube-api-access-skhsj\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841770 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6e84ddfa-88f9-4e3b-9708-65796373121b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jm6kd\" (UID: \"6e84ddfa-88f9-4e3b-9708-65796373121b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jm6kd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841787 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb097132-cc58-4d48-82c5-1e9f0fc0d967-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n2mb5\" (UID: \"bb097132-cc58-4d48-82c5-1e9f0fc0d967\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2mb5" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841835 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-console-config\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841851 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddccm\" (UniqueName: \"kubernetes.io/projected/258f384d-e8e6-410b-acb9-50d871e0d0d6-kube-api-access-ddccm\") pod \"machine-config-operator-74547568cd-kmvjl\" (UID: \"258f384d-e8e6-410b-acb9-50d871e0d0d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841865 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/35632592-89c3-413c-97d1-da2931f1a778-certs\") pod \"machine-config-server-k6mdb\" (UID: \"35632592-89c3-413c-97d1-da2931f1a778\") " 
pod="openshift-machine-config-operator/machine-config-server-k6mdb" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841883 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/258f384d-e8e6-410b-acb9-50d871e0d0d6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kmvjl\" (UID: \"258f384d-e8e6-410b-acb9-50d871e0d0d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841906 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d1ff5656-430d-4071-9a26-ce6bf8ec844b-metrics-tls\") pod \"ingress-operator-5b745b69d9-dvb29\" (UID: \"d1ff5656-430d-4071-9a26-ce6bf8ec844b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841921 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bf32718-d22d-4e55-b158-43a02ef6a67f-secret-volume\") pod \"collect-profiles-29555445-8xpmk\" (UID: \"0bf32718-d22d-4e55-b158-43a02ef6a67f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841936 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnp2t\" (UniqueName: \"kubernetes.io/projected/8fd26103-6087-4ded-9197-cd19279c4413-kube-api-access-pnp2t\") pod \"packageserver-d55dfcdfc-grxzk\" (UID: \"8fd26103-6087-4ded-9197-cd19279c4413\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841951 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-gbxhm\" (UniqueName: \"kubernetes.io/projected/579d6f1b-e8f5-4d51-9527-41988322b007-kube-api-access-gbxhm\") pod \"csi-hostpathplugin-gx8zc\" (UID: \"579d6f1b-e8f5-4d51-9527-41988322b007\") " pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841968 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.841985 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5b8b3d80-e845-47b0-928e-a3faff312e25-signing-key\") pod \"service-ca-9c57cc56f-5h6nt\" (UID: \"5b8b3d80-e845-47b0-928e-a3faff312e25\") " pod="openshift-service-ca/service-ca-9c57cc56f-5h6nt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.842010 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-oauth-serving-cert\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.842028 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-registry-certificates\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.842085 
4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bf32718-d22d-4e55-b158-43a02ef6a67f-config-volume\") pod \"collect-profiles-29555445-8xpmk\" (UID: \"0bf32718-d22d-4e55-b158-43a02ef6a67f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.842103 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bcdaf53-8f1a-4748-96bc-721dc6b821fc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cdjkj\" (UID: \"9bcdaf53-8f1a-4748-96bc-721dc6b821fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdjkj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.842131 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-registry-tls\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.842149 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e3e2355-1e51-4248-8dba-a8f3c45657f9-config-volume\") pod \"dns-default-svx7c\" (UID: \"7e3e2355-1e51-4248-8dba-a8f3c45657f9\") " pod="openshift-dns/dns-default-svx7c" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.842164 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4519b556-4bf4-4c0a-a3a7-d7441728444d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-987hp\" (UID: 
\"4519b556-4bf4-4c0a-a3a7-d7441728444d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.842181 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvhk4\" (UniqueName: \"kubernetes.io/projected/35632592-89c3-413c-97d1-da2931f1a778-kube-api-access-fvhk4\") pod \"machine-config-server-k6mdb\" (UID: \"35632592-89c3-413c-97d1-da2931f1a778\") " pod="openshift-machine-config-operator/machine-config-server-k6mdb" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.842207 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/31435028-adc4-4b77-85d3-5d7659cd80f0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ph4hc\" (UID: \"31435028-adc4-4b77-85d3-5d7659cd80f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.842235 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-trusted-ca-bundle\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.842259 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/44a36bdc-062d-4e68-abb8-4ec20ba3e41b-profile-collector-cert\") pod \"catalog-operator-68c6474976-d8ntm\" (UID: \"44a36bdc-062d-4e68-abb8-4ec20ba3e41b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.842274 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/579d6f1b-e8f5-4d51-9527-41988322b007-csi-data-dir\") pod \"csi-hostpathplugin-gx8zc\" (UID: \"579d6f1b-e8f5-4d51-9527-41988322b007\") " pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.842490 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.844215 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4519b556-4bf4-4c0a-a3a7-d7441728444d-config\") pod \"authentication-operator-69f744f599-987hp\" (UID: \"4519b556-4bf4-4c0a-a3a7-d7441728444d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.844280 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9xgd\" (UniqueName: \"kubernetes.io/projected/3e627486-0771-4742-90d4-a9166283471f-kube-api-access-x9xgd\") pod \"dns-operator-744455d44c-n84cj\" (UID: \"3e627486-0771-4742-90d4-a9166283471f\") " pod="openshift-dns-operator/dns-operator-744455d44c-n84cj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.844300 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdnzr\" (UniqueName: \"kubernetes.io/projected/faef0d2c-43e3-4bc4-92a1-e5c6b08cd982-kube-api-access-cdnzr\") pod \"package-server-manager-789f6589d5-bb9w7\" (UID: \"faef0d2c-43e3-4bc4-92a1-e5c6b08cd982\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bb9w7" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.844329 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjwb9\" (UniqueName: \"kubernetes.io/projected/4519b556-4bf4-4c0a-a3a7-d7441728444d-kube-api-access-wjwb9\") pod \"authentication-operator-69f744f599-987hp\" (UID: \"4519b556-4bf4-4c0a-a3a7-d7441728444d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.844364 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffzbg\" (UniqueName: \"kubernetes.io/projected/5b8b3d80-e845-47b0-928e-a3faff312e25-kube-api-access-ffzbg\") pod \"service-ca-9c57cc56f-5h6nt\" (UID: \"5b8b3d80-e845-47b0-928e-a3faff312e25\") " pod="openshift-service-ca/service-ca-9c57cc56f-5h6nt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.844394 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/44a36bdc-062d-4e68-abb8-4ec20ba3e41b-srv-cert\") pod \"catalog-operator-68c6474976-d8ntm\" (UID: \"44a36bdc-062d-4e68-abb8-4ec20ba3e41b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.844427 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/579d6f1b-e8f5-4d51-9527-41988322b007-socket-dir\") pod \"csi-hostpathplugin-gx8zc\" (UID: \"579d6f1b-e8f5-4d51-9527-41988322b007\") " pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.844443 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" 
(UniqueName: \"kubernetes.io/secret/51605fc6-0da6-4a38-b44a-d8d47080ff6a-stats-auth\") pod \"router-default-5444994796-jthtj\" (UID: \"51605fc6-0da6-4a38-b44a-d8d47080ff6a\") " pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.844457 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/579d6f1b-e8f5-4d51-9527-41988322b007-registration-dir\") pod \"csi-hostpathplugin-gx8zc\" (UID: \"579d6f1b-e8f5-4d51-9527-41988322b007\") " pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.844552 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-bound-sa-token\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.844576 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkx8k\" (UniqueName: \"kubernetes.io/projected/35da9b9e-133b-4d7f-a32e-908d9fc7734b-kube-api-access-fkx8k\") pod \"auto-csr-approver-29555450-w9jtm\" (UID: \"35da9b9e-133b-4d7f-a32e-908d9fc7734b\") " pod="openshift-infra/auto-csr-approver-29555450-w9jtm" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.844806 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6bhw\" (UniqueName: \"kubernetes.io/projected/6e84ddfa-88f9-4e3b-9708-65796373121b-kube-api-access-h6bhw\") pod \"kube-storage-version-migrator-operator-b67b599dd-jm6kd\" (UID: \"6e84ddfa-88f9-4e3b-9708-65796373121b\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jm6kd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.844841 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31435028-adc4-4b77-85d3-5d7659cd80f0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ph4hc\" (UID: \"31435028-adc4-4b77-85d3-5d7659cd80f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.844861 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-console-oauth-config\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.845159 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faef0d2c-43e3-4bc4-92a1-e5c6b08cd982-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bb9w7\" (UID: \"faef0d2c-43e3-4bc4-92a1-e5c6b08cd982\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bb9w7" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.845276 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-console-serving-cert\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.845394 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-trusted-ca\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.845518 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/129d8f6c-5fb6-48ca-b269-c8c17a3a3efe-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5c222\" (UID: \"129d8f6c-5fb6-48ca-b269-c8c17a3a3efe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5c222" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.846692 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-trusted-ca\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.846806 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51605fc6-0da6-4a38-b44a-d8d47080ff6a-service-ca-bundle\") pod \"router-default-5444994796-jthtj\" (UID: \"51605fc6-0da6-4a38-b44a-d8d47080ff6a\") " pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.846884 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/35632592-89c3-413c-97d1-da2931f1a778-node-bootstrap-token\") pod \"machine-config-server-k6mdb\" (UID: \"35632592-89c3-413c-97d1-da2931f1a778\") " 
pod="openshift-machine-config-operator/machine-config-server-k6mdb" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.846994 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhq5j\" (UniqueName: \"kubernetes.io/projected/fc63079e-bbae-4de6-b756-e23a6df3f250-kube-api-access-dhq5j\") pod \"cluster-samples-operator-665b6dd947-j2cpq\" (UID: \"fc63079e-bbae-4de6-b756-e23a6df3f250\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j2cpq" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.847083 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/51605fc6-0da6-4a38-b44a-d8d47080ff6a-default-certificate\") pod \"router-default-5444994796-jthtj\" (UID: \"51605fc6-0da6-4a38-b44a-d8d47080ff6a\") " pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:56 crc kubenswrapper[4832]: E0312 14:50:56.847569 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:50:57.347549694 +0000 UTC m=+215.991563920 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.855573 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.855689 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-registry-tls\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.856260 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.874096 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.894438 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.914908 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.934585 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.948646 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e3e2355-1e51-4248-8dba-a8f3c45657f9-metrics-tls\") pod \"dns-default-svx7c\" (UID: \"7e3e2355-1e51-4248-8dba-a8f3c45657f9\") " pod="openshift-dns/dns-default-svx7c" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.948697 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1ff5656-430d-4071-9a26-ce6bf8ec844b-trusted-ca\") pod \"ingress-operator-5b745b69d9-dvb29\" (UID: \"d1ff5656-430d-4071-9a26-ce6bf8ec844b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.948732 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad-srv-cert\") pod \"olm-operator-6b444d44fb-txst6\" (UID: \"2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.948759 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb097132-cc58-4d48-82c5-1e9f0fc0d967-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n2mb5\" (UID: \"bb097132-cc58-4d48-82c5-1e9f0fc0d967\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2mb5" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.948781 4832 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/579d6f1b-e8f5-4d51-9527-41988322b007-mountpoint-dir\") pod \"csi-hostpathplugin-gx8zc\" (UID: \"579d6f1b-e8f5-4d51-9527-41988322b007\") " pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.948806 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5mtl\" (UniqueName: \"kubernetes.io/projected/1e8afb95-8d48-45d4-88f7-900c0dc949f3-kube-api-access-c5mtl\") pod \"ingress-canary-62nqf\" (UID: \"1e8afb95-8d48-45d4-88f7-900c0dc949f3\") " pod="openshift-ingress-canary/ingress-canary-62nqf" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.948906 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvlg5\" (UniqueName: \"kubernetes.io/projected/51605fc6-0da6-4a38-b44a-d8d47080ff6a-kube-api-access-wvlg5\") pod \"router-default-5444994796-jthtj\" (UID: \"51605fc6-0da6-4a38-b44a-d8d47080ff6a\") " pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.948959 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bcdaf53-8f1a-4748-96bc-721dc6b821fc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cdjkj\" (UID: \"9bcdaf53-8f1a-4748-96bc-721dc6b821fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdjkj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.948984 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmjmm\" (UniqueName: \"kubernetes.io/projected/9bcdaf53-8f1a-4748-96bc-721dc6b821fc-kube-api-access-xmjmm\") pod \"openshift-controller-manager-operator-756b6f6bc6-cdjkj\" (UID: \"9bcdaf53-8f1a-4748-96bc-721dc6b821fc\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdjkj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949014 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/258f384d-e8e6-410b-acb9-50d871e0d0d6-images\") pod \"machine-config-operator-74547568cd-kmvjl\" (UID: \"258f384d-e8e6-410b-acb9-50d871e0d0d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949047 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e84ddfa-88f9-4e3b-9708-65796373121b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jm6kd\" (UID: \"6e84ddfa-88f9-4e3b-9708-65796373121b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jm6kd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949070 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb097132-cc58-4d48-82c5-1e9f0fc0d967-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n2mb5\" (UID: \"bb097132-cc58-4d48-82c5-1e9f0fc0d967\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2mb5" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949093 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-console-config\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949115 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddccm\" 
(UniqueName: \"kubernetes.io/projected/258f384d-e8e6-410b-acb9-50d871e0d0d6-kube-api-access-ddccm\") pod \"machine-config-operator-74547568cd-kmvjl\" (UID: \"258f384d-e8e6-410b-acb9-50d871e0d0d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949149 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/35632592-89c3-413c-97d1-da2931f1a778-certs\") pod \"machine-config-server-k6mdb\" (UID: \"35632592-89c3-413c-97d1-da2931f1a778\") " pod="openshift-machine-config-operator/machine-config-server-k6mdb" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949199 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d1ff5656-430d-4071-9a26-ce6bf8ec844b-metrics-tls\") pod \"ingress-operator-5b745b69d9-dvb29\" (UID: \"d1ff5656-430d-4071-9a26-ce6bf8ec844b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949228 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bf32718-d22d-4e55-b158-43a02ef6a67f-secret-volume\") pod \"collect-profiles-29555445-8xpmk\" (UID: \"0bf32718-d22d-4e55-b158-43a02ef6a67f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949250 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnp2t\" (UniqueName: \"kubernetes.io/projected/8fd26103-6087-4ded-9197-cd19279c4413-kube-api-access-pnp2t\") pod \"packageserver-d55dfcdfc-grxzk\" (UID: \"8fd26103-6087-4ded-9197-cd19279c4413\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949270 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbxhm\" (UniqueName: \"kubernetes.io/projected/579d6f1b-e8f5-4d51-9527-41988322b007-kube-api-access-gbxhm\") pod \"csi-hostpathplugin-gx8zc\" (UID: \"579d6f1b-e8f5-4d51-9527-41988322b007\") " pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949289 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/258f384d-e8e6-410b-acb9-50d871e0d0d6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kmvjl\" (UID: \"258f384d-e8e6-410b-acb9-50d871e0d0d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949313 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5b8b3d80-e845-47b0-928e-a3faff312e25-signing-key\") pod \"service-ca-9c57cc56f-5h6nt\" (UID: \"5b8b3d80-e845-47b0-928e-a3faff312e25\") " pod="openshift-service-ca/service-ca-9c57cc56f-5h6nt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949336 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-oauth-serving-cert\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949367 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bf32718-d22d-4e55-b158-43a02ef6a67f-config-volume\") pod \"collect-profiles-29555445-8xpmk\" (UID: \"0bf32718-d22d-4e55-b158-43a02ef6a67f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk" Mar 12 14:50:56 
crc kubenswrapper[4832]: I0312 14:50:56.949389 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bcdaf53-8f1a-4748-96bc-721dc6b821fc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cdjkj\" (UID: \"9bcdaf53-8f1a-4748-96bc-721dc6b821fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdjkj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949409 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e3e2355-1e51-4248-8dba-a8f3c45657f9-config-volume\") pod \"dns-default-svx7c\" (UID: \"7e3e2355-1e51-4248-8dba-a8f3c45657f9\") " pod="openshift-dns/dns-default-svx7c" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949441 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4519b556-4bf4-4c0a-a3a7-d7441728444d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-987hp\" (UID: \"4519b556-4bf4-4c0a-a3a7-d7441728444d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949466 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvhk4\" (UniqueName: \"kubernetes.io/projected/35632592-89c3-413c-97d1-da2931f1a778-kube-api-access-fvhk4\") pod \"machine-config-server-k6mdb\" (UID: \"35632592-89c3-413c-97d1-da2931f1a778\") " pod="openshift-machine-config-operator/machine-config-server-k6mdb" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949598 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/31435028-adc4-4b77-85d3-5d7659cd80f0-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-ph4hc\" (UID: \"31435028-adc4-4b77-85d3-5d7659cd80f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949659 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/258f384d-e8e6-410b-acb9-50d871e0d0d6-images\") pod \"machine-config-operator-74547568cd-kmvjl\" (UID: \"258f384d-e8e6-410b-acb9-50d871e0d0d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.948986 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/579d6f1b-e8f5-4d51-9527-41988322b007-mountpoint-dir\") pod \"csi-hostpathplugin-gx8zc\" (UID: \"579d6f1b-e8f5-4d51-9527-41988322b007\") " pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.949693 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-trusted-ca-bundle\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.950874 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/44a36bdc-062d-4e68-abb8-4ec20ba3e41b-profile-collector-cert\") pod \"catalog-operator-68c6474976-d8ntm\" (UID: \"44a36bdc-062d-4e68-abb8-4ec20ba3e41b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.950926 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/579d6f1b-e8f5-4d51-9527-41988322b007-csi-data-dir\") pod \"csi-hostpathplugin-gx8zc\" (UID: \"579d6f1b-e8f5-4d51-9527-41988322b007\") " pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.950967 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4519b556-4bf4-4c0a-a3a7-d7441728444d-config\") pod \"authentication-operator-69f744f599-987hp\" (UID: \"4519b556-4bf4-4c0a-a3a7-d7441728444d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.950993 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9xgd\" (UniqueName: \"kubernetes.io/projected/3e627486-0771-4742-90d4-a9166283471f-kube-api-access-x9xgd\") pod \"dns-operator-744455d44c-n84cj\" (UID: \"3e627486-0771-4742-90d4-a9166283471f\") " pod="openshift-dns-operator/dns-operator-744455d44c-n84cj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951023 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdnzr\" (UniqueName: \"kubernetes.io/projected/faef0d2c-43e3-4bc4-92a1-e5c6b08cd982-kube-api-access-cdnzr\") pod \"package-server-manager-789f6589d5-bb9w7\" (UID: \"faef0d2c-43e3-4bc4-92a1-e5c6b08cd982\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bb9w7" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951054 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjwb9\" (UniqueName: \"kubernetes.io/projected/4519b556-4bf4-4c0a-a3a7-d7441728444d-kube-api-access-wjwb9\") pod \"authentication-operator-69f744f599-987hp\" (UID: \"4519b556-4bf4-4c0a-a3a7-d7441728444d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp" Mar 12 14:50:56 crc kubenswrapper[4832]: 
I0312 14:50:56.951236 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffzbg\" (UniqueName: \"kubernetes.io/projected/5b8b3d80-e845-47b0-928e-a3faff312e25-kube-api-access-ffzbg\") pod \"service-ca-9c57cc56f-5h6nt\" (UID: \"5b8b3d80-e845-47b0-928e-a3faff312e25\") " pod="openshift-service-ca/service-ca-9c57cc56f-5h6nt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951284 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/44a36bdc-062d-4e68-abb8-4ec20ba3e41b-srv-cert\") pod \"catalog-operator-68c6474976-d8ntm\" (UID: \"44a36bdc-062d-4e68-abb8-4ec20ba3e41b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951312 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/579d6f1b-e8f5-4d51-9527-41988322b007-socket-dir\") pod \"csi-hostpathplugin-gx8zc\" (UID: \"579d6f1b-e8f5-4d51-9527-41988322b007\") " pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951344 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/51605fc6-0da6-4a38-b44a-d8d47080ff6a-stats-auth\") pod \"router-default-5444994796-jthtj\" (UID: \"51605fc6-0da6-4a38-b44a-d8d47080ff6a\") " pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951391 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkx8k\" (UniqueName: \"kubernetes.io/projected/35da9b9e-133b-4d7f-a32e-908d9fc7734b-kube-api-access-fkx8k\") pod \"auto-csr-approver-29555450-w9jtm\" (UID: \"35da9b9e-133b-4d7f-a32e-908d9fc7734b\") " pod="openshift-infra/auto-csr-approver-29555450-w9jtm" Mar 12 14:50:56 crc 
kubenswrapper[4832]: I0312 14:50:56.951425 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6bhw\" (UniqueName: \"kubernetes.io/projected/6e84ddfa-88f9-4e3b-9708-65796373121b-kube-api-access-h6bhw\") pod \"kube-storage-version-migrator-operator-b67b599dd-jm6kd\" (UID: \"6e84ddfa-88f9-4e3b-9708-65796373121b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jm6kd" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951455 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31435028-adc4-4b77-85d3-5d7659cd80f0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ph4hc\" (UID: \"31435028-adc4-4b77-85d3-5d7659cd80f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951484 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/579d6f1b-e8f5-4d51-9527-41988322b007-registration-dir\") pod \"csi-hostpathplugin-gx8zc\" (UID: \"579d6f1b-e8f5-4d51-9527-41988322b007\") " pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951530 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-console-oauth-config\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951571 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951599 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faef0d2c-43e3-4bc4-92a1-e5c6b08cd982-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bb9w7\" (UID: \"faef0d2c-43e3-4bc4-92a1-e5c6b08cd982\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bb9w7" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951643 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/129d8f6c-5fb6-48ca-b269-c8c17a3a3efe-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5c222\" (UID: \"129d8f6c-5fb6-48ca-b269-c8c17a3a3efe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5c222" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951669 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-console-serving-cert\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951698 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51605fc6-0da6-4a38-b44a-d8d47080ff6a-service-ca-bundle\") pod \"router-default-5444994796-jthtj\" (UID: \"51605fc6-0da6-4a38-b44a-d8d47080ff6a\") " pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951729 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dhq5j\" (UniqueName: \"kubernetes.io/projected/fc63079e-bbae-4de6-b756-e23a6df3f250-kube-api-access-dhq5j\") pod \"cluster-samples-operator-665b6dd947-j2cpq\" (UID: \"fc63079e-bbae-4de6-b756-e23a6df3f250\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j2cpq" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951755 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/51605fc6-0da6-4a38-b44a-d8d47080ff6a-default-certificate\") pod \"router-default-5444994796-jthtj\" (UID: \"51605fc6-0da6-4a38-b44a-d8d47080ff6a\") " pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951779 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/35632592-89c3-413c-97d1-da2931f1a778-node-bootstrap-token\") pod \"machine-config-server-k6mdb\" (UID: \"35632592-89c3-413c-97d1-da2931f1a778\") " pod="openshift-machine-config-operator/machine-config-server-k6mdb" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951823 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghn5p\" (UniqueName: \"kubernetes.io/projected/d1ff5656-430d-4071-9a26-ce6bf8ec844b-kube-api-access-ghn5p\") pod \"ingress-operator-5b745b69d9-dvb29\" (UID: \"d1ff5656-430d-4071-9a26-ce6bf8ec844b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951847 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f48kh\" (UniqueName: \"kubernetes.io/projected/0bf32718-d22d-4e55-b158-43a02ef6a67f-kube-api-access-f48kh\") pod \"collect-profiles-29555445-8xpmk\" (UID: \"0bf32718-d22d-4e55-b158-43a02ef6a67f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951878 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/129d8f6c-5fb6-48ca-b269-c8c17a3a3efe-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5c222\" (UID: \"129d8f6c-5fb6-48ca-b269-c8c17a3a3efe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5c222" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951907 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4519b556-4bf4-4c0a-a3a7-d7441728444d-serving-cert\") pod \"authentication-operator-69f744f599-987hp\" (UID: \"4519b556-4bf4-4c0a-a3a7-d7441728444d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951949 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sshvk\" (UniqueName: \"kubernetes.io/projected/24defe84-7690-4b69-b9db-5ee531d7f725-kube-api-access-sshvk\") pod \"multus-admission-controller-857f4d67dd-gjmmz\" (UID: \"24defe84-7690-4b69-b9db-5ee531d7f725\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gjmmz" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.951980 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8fd26103-6087-4ded-9197-cd19279c4413-tmpfs\") pod \"packageserver-d55dfcdfc-grxzk\" (UID: \"8fd26103-6087-4ded-9197-cd19279c4413\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952013 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/24defe84-7690-4b69-b9db-5ee531d7f725-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gjmmz\" (UID: \"24defe84-7690-4b69-b9db-5ee531d7f725\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gjmmz" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952041 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8fd26103-6087-4ded-9197-cd19279c4413-webhook-cert\") pod \"packageserver-d55dfcdfc-grxzk\" (UID: \"8fd26103-6087-4ded-9197-cd19279c4413\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952064 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hq87\" (UniqueName: \"kubernetes.io/projected/7e3e2355-1e51-4248-8dba-a8f3c45657f9-kube-api-access-6hq87\") pod \"dns-default-svx7c\" (UID: \"7e3e2355-1e51-4248-8dba-a8f3c45657f9\") " pod="openshift-dns/dns-default-svx7c" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952109 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5b8b3d80-e845-47b0-928e-a3faff312e25-signing-cabundle\") pod \"service-ca-9c57cc56f-5h6nt\" (UID: \"5b8b3d80-e845-47b0-928e-a3faff312e25\") " pod="openshift-service-ca/service-ca-9c57cc56f-5h6nt" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952133 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e627486-0771-4742-90d4-a9166283471f-metrics-tls\") pod \"dns-operator-744455d44c-n84cj\" (UID: \"3e627486-0771-4742-90d4-a9166283471f\") " pod="openshift-dns-operator/dns-operator-744455d44c-n84cj" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952159 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4519b556-4bf4-4c0a-a3a7-d7441728444d-service-ca-bundle\") pod \"authentication-operator-69f744f599-987hp\" (UID: \"4519b556-4bf4-4c0a-a3a7-d7441728444d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952169 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/258f384d-e8e6-410b-acb9-50d871e0d0d6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kmvjl\" (UID: \"258f384d-e8e6-410b-acb9-50d871e0d0d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952188 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0479cc4-afec-46e5-9472-e82716b4e9b6-config\") pod \"kube-controller-manager-operator-78b949d7b-8xd72\" (UID: \"d0479cc4-afec-46e5-9472-e82716b4e9b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8xd72" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952285 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/244172fe-e44e-4f8f-86d5-69f70a7c5dd0-proxy-tls\") pod \"machine-config-controller-84d6567774-dc9cx\" (UID: \"244172fe-e44e-4f8f-86d5-69f70a7c5dd0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dc9cx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952333 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrq4q\" (UniqueName: \"kubernetes.io/projected/244172fe-e44e-4f8f-86d5-69f70a7c5dd0-kube-api-access-vrq4q\") pod \"machine-config-controller-84d6567774-dc9cx\" (UID: \"244172fe-e44e-4f8f-86d5-69f70a7c5dd0\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dc9cx" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952392 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a9d5c80a-fef6-4eae-a1e9-951f2d72647b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wp6fb\" (UID: \"a9d5c80a-fef6-4eae-a1e9-951f2d72647b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp6fb" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952431 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b968d323-a039-4d05-9e1f-1d9d3b0ab1a1-serving-cert\") pod \"service-ca-operator-777779d784-xj2sf\" (UID: \"b968d323-a039-4d05-9e1f-1d9d3b0ab1a1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xj2sf" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952466 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndg4x\" (UniqueName: \"kubernetes.io/projected/44a36bdc-062d-4e68-abb8-4ec20ba3e41b-kube-api-access-ndg4x\") pod \"catalog-operator-68c6474976-d8ntm\" (UID: \"44a36bdc-062d-4e68-abb8-4ec20ba3e41b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952573 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/258f384d-e8e6-410b-acb9-50d871e0d0d6-proxy-tls\") pod \"machine-config-operator-74547568cd-kmvjl\" (UID: \"258f384d-e8e6-410b-acb9-50d871e0d0d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl" Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952618 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/b968d323-a039-4d05-9e1f-1d9d3b0ab1a1-config\") pod \"service-ca-operator-777779d784-xj2sf\" (UID: \"b968d323-a039-4d05-9e1f-1d9d3b0ab1a1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xj2sf"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952665 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/244172fe-e44e-4f8f-86d5-69f70a7c5dd0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dc9cx\" (UID: \"244172fe-e44e-4f8f-86d5-69f70a7c5dd0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dc9cx"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952712 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxdx7\" (UniqueName: \"kubernetes.io/projected/31435028-adc4-4b77-85d3-5d7659cd80f0-kube-api-access-zxdx7\") pod \"marketplace-operator-79b997595-ph4hc\" (UID: \"31435028-adc4-4b77-85d3-5d7659cd80f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952746 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrhzd\" (UniqueName: \"kubernetes.io/projected/b968d323-a039-4d05-9e1f-1d9d3b0ab1a1-kube-api-access-xrhzd\") pod \"service-ca-operator-777779d784-xj2sf\" (UID: \"b968d323-a039-4d05-9e1f-1d9d3b0ab1a1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xj2sf"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952799 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxbxr\" (UniqueName: \"kubernetes.io/projected/2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad-kube-api-access-xxbxr\") pod \"olm-operator-6b444d44fb-txst6\" (UID: \"2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952837 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwxgg\" (UniqueName: \"kubernetes.io/projected/8ecc652e-061a-4e8a-8757-d6eea707acf1-kube-api-access-mwxgg\") pod \"cluster-image-registry-operator-dc59b4c8b-256sp\" (UID: \"8ecc652e-061a-4e8a-8757-d6eea707acf1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952871 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51605fc6-0da6-4a38-b44a-d8d47080ff6a-metrics-certs\") pod \"router-default-5444994796-jthtj\" (UID: \"51605fc6-0da6-4a38-b44a-d8d47080ff6a\") " pod="openshift-ingress/router-default-5444994796-jthtj"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952923 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d1ff5656-430d-4071-9a26-ce6bf8ec844b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dvb29\" (UID: \"d1ff5656-430d-4071-9a26-ce6bf8ec844b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952955 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ecc652e-061a-4e8a-8757-d6eea707acf1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-256sp\" (UID: \"8ecc652e-061a-4e8a-8757-d6eea707acf1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.953000 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg7l2\" (UniqueName: \"kubernetes.io/projected/a9d5c80a-fef6-4eae-a1e9-951f2d72647b-kube-api-access-mg7l2\") pod \"control-plane-machine-set-operator-78cbb6b69f-wp6fb\" (UID: \"a9d5c80a-fef6-4eae-a1e9-951f2d72647b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp6fb"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.953039 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/129d8f6c-5fb6-48ca-b269-c8c17a3a3efe-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5c222\" (UID: \"129d8f6c-5fb6-48ca-b269-c8c17a3a3efe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5c222"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.953065 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1e8afb95-8d48-45d4-88f7-900c0dc949f3-cert\") pod \"ingress-canary-62nqf\" (UID: \"1e8afb95-8d48-45d4-88f7-900c0dc949f3\") " pod="openshift-ingress-canary/ingress-canary-62nqf"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.953097 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0479cc4-afec-46e5-9472-e82716b4e9b6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8xd72\" (UID: \"d0479cc4-afec-46e5-9472-e82716b4e9b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8xd72"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.953132 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8fd26103-6087-4ded-9197-cd19279c4413-apiservice-cert\") pod \"packageserver-d55dfcdfc-grxzk\" (UID: \"8fd26103-6087-4ded-9197-cd19279c4413\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.953164 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad-profile-collector-cert\") pod \"olm-operator-6b444d44fb-txst6\" (UID: \"2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.953194 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8pkh\" (UniqueName: \"kubernetes.io/projected/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-kube-api-access-x8pkh\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.953244 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0479cc4-afec-46e5-9472-e82716b4e9b6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8xd72\" (UID: \"d0479cc4-afec-46e5-9472-e82716b4e9b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8xd72"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.952962 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0479cc4-afec-46e5-9472-e82716b4e9b6-config\") pod \"kube-controller-manager-operator-78b949d7b-8xd72\" (UID: \"d0479cc4-afec-46e5-9472-e82716b4e9b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8xd72"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.953270 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fc63079e-bbae-4de6-b756-e23a6df3f250-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j2cpq\" (UID: \"fc63079e-bbae-4de6-b756-e23a6df3f250\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j2cpq"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.953311 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb097132-cc58-4d48-82c5-1e9f0fc0d967-config\") pod \"kube-apiserver-operator-766d6c64bb-n2mb5\" (UID: \"bb097132-cc58-4d48-82c5-1e9f0fc0d967\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2mb5"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.953353 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e84ddfa-88f9-4e3b-9708-65796373121b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jm6kd\" (UID: \"6e84ddfa-88f9-4e3b-9708-65796373121b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jm6kd"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.953419 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/579d6f1b-e8f5-4d51-9527-41988322b007-plugins-dir\") pod \"csi-hostpathplugin-gx8zc\" (UID: \"579d6f1b-e8f5-4d51-9527-41988322b007\") " pod="hostpath-provisioner/csi-hostpathplugin-gx8zc"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.954055 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/244172fe-e44e-4f8f-86d5-69f70a7c5dd0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dc9cx\" (UID: \"244172fe-e44e-4f8f-86d5-69f70a7c5dd0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dc9cx"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.954071 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/579d6f1b-e8f5-4d51-9527-41988322b007-socket-dir\") pod \"csi-hostpathplugin-gx8zc\" (UID: \"579d6f1b-e8f5-4d51-9527-41988322b007\") " pod="hostpath-provisioner/csi-hostpathplugin-gx8zc"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.954291 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/579d6f1b-e8f5-4d51-9527-41988322b007-registration-dir\") pod \"csi-hostpathplugin-gx8zc\" (UID: \"579d6f1b-e8f5-4d51-9527-41988322b007\") " pod="hostpath-provisioner/csi-hostpathplugin-gx8zc"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.954718 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad-srv-cert\") pod \"olm-operator-6b444d44fb-txst6\" (UID: \"2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.954720 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ecc652e-061a-4e8a-8757-d6eea707acf1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-256sp\" (UID: \"8ecc652e-061a-4e8a-8757-d6eea707acf1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.954776 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb097132-cc58-4d48-82c5-1e9f0fc0d967-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n2mb5\" (UID: \"bb097132-cc58-4d48-82c5-1e9f0fc0d967\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2mb5"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.954779 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ecc652e-061a-4e8a-8757-d6eea707acf1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-256sp\" (UID: \"8ecc652e-061a-4e8a-8757-d6eea707acf1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.954828 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-service-ca\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.954893 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/579d6f1b-e8f5-4d51-9527-41988322b007-plugins-dir\") pod \"csi-hostpathplugin-gx8zc\" (UID: \"579d6f1b-e8f5-4d51-9527-41988322b007\") " pod="hostpath-provisioner/csi-hostpathplugin-gx8zc"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.955012 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/579d6f1b-e8f5-4d51-9527-41988322b007-csi-data-dir\") pod \"csi-hostpathplugin-gx8zc\" (UID: \"579d6f1b-e8f5-4d51-9527-41988322b007\") " pod="hostpath-provisioner/csi-hostpathplugin-gx8zc"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.955199 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ecc652e-061a-4e8a-8757-d6eea707acf1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-256sp\" (UID: \"8ecc652e-061a-4e8a-8757-d6eea707acf1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp"
Mar 12 14:50:56 crc kubenswrapper[4832]: E0312 14:50:56.955427 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:50:57.455409894 +0000 UTC m=+216.099424120 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.955546 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.956254 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bf32718-d22d-4e55-b158-43a02ef6a67f-secret-volume\") pod \"collect-profiles-29555445-8xpmk\" (UID: \"0bf32718-d22d-4e55-b158-43a02ef6a67f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.957381 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-oauth-serving-cert\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.957960 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8fd26103-6087-4ded-9197-cd19279c4413-tmpfs\") pod \"packageserver-d55dfcdfc-grxzk\" (UID: \"8fd26103-6087-4ded-9197-cd19279c4413\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.958411 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb097132-cc58-4d48-82c5-1e9f0fc0d967-config\") pod \"kube-apiserver-operator-766d6c64bb-n2mb5\" (UID: \"bb097132-cc58-4d48-82c5-1e9f0fc0d967\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2mb5"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.958583 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bf32718-d22d-4e55-b158-43a02ef6a67f-config-volume\") pod \"collect-profiles-29555445-8xpmk\" (UID: \"0bf32718-d22d-4e55-b158-43a02ef6a67f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.958782 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-console-config\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.959010 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-trusted-ca-bundle\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.959368 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-console-oauth-config\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.960154 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/244172fe-e44e-4f8f-86d5-69f70a7c5dd0-proxy-tls\") pod \"machine-config-controller-84d6567774-dc9cx\" (UID: \"244172fe-e44e-4f8f-86d5-69f70a7c5dd0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dc9cx"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.960225 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/44a36bdc-062d-4e68-abb8-4ec20ba3e41b-profile-collector-cert\") pod \"catalog-operator-68c6474976-d8ntm\" (UID: \"44a36bdc-062d-4e68-abb8-4ec20ba3e41b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.960408 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-console-serving-cert\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.960681 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/258f384d-e8e6-410b-acb9-50d871e0d0d6-proxy-tls\") pod \"machine-config-operator-74547568cd-kmvjl\" (UID: \"258f384d-e8e6-410b-acb9-50d871e0d0d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.961001 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0479cc4-afec-46e5-9472-e82716b4e9b6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8xd72\" (UID: \"d0479cc4-afec-46e5-9472-e82716b4e9b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8xd72"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.961411 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-service-ca\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.962789 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fc63079e-bbae-4de6-b756-e23a6df3f250-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j2cpq\" (UID: \"fc63079e-bbae-4de6-b756-e23a6df3f250\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j2cpq"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.962829 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/31435028-adc4-4b77-85d3-5d7659cd80f0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ph4hc\" (UID: \"31435028-adc4-4b77-85d3-5d7659cd80f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.964191 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/44a36bdc-062d-4e68-abb8-4ec20ba3e41b-srv-cert\") pod \"catalog-operator-68c6474976-d8ntm\" (UID: \"44a36bdc-062d-4e68-abb8-4ec20ba3e41b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.964299 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad-profile-collector-cert\") pod \"olm-operator-6b444d44fb-txst6\" (UID: \"2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.975215 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Mar 12 14:50:56 crc kubenswrapper[4832]: I0312 14:50:56.994473 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.019883 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.026272 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31435028-adc4-4b77-85d3-5d7659cd80f0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ph4hc\" (UID: \"31435028-adc4-4b77-85d3-5d7659cd80f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.034252 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.041702 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ecc652e-061a-4e8a-8757-d6eea707acf1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-256sp\" (UID: \"8ecc652e-061a-4e8a-8757-d6eea707acf1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.054847 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.055415 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.056654 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:50:57.556626852 +0000 UTC m=+216.200641078 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.057007 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76"
Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.057640 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:50:57.557633301 +0000 UTC m=+216.201647527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.074903 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.095162 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.116285 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.121372 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4519b556-4bf4-4c0a-a3a7-d7441728444d-serving-cert\") pod \"authentication-operator-69f744f599-987hp\" (UID: \"4519b556-4bf4-4c0a-a3a7-d7441728444d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.134609 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.142020 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4519b556-4bf4-4c0a-a3a7-d7441728444d-config\") pod \"authentication-operator-69f744f599-987hp\" (UID: \"4519b556-4bf4-4c0a-a3a7-d7441728444d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.158393 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.158635 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:50:57.658606512 +0000 UTC m=+216.302620768 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.159170 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76"
Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.159732 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:50:57.659711843 +0000 UTC m=+216.303726109 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.165552 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.175395 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.176887 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4519b556-4bf4-4c0a-a3a7-d7441728444d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-987hp\" (UID: \"4519b556-4bf4-4c0a-a3a7-d7441728444d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.179170 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4519b556-4bf4-4c0a-a3a7-d7441728444d-service-ca-bundle\") pod \"authentication-operator-69f744f599-987hp\" (UID: \"4519b556-4bf4-4c0a-a3a7-d7441728444d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.194831 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.215352 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.244527 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.250554 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e627486-0771-4742-90d4-a9166283471f-metrics-tls\") pod \"dns-operator-744455d44c-n84cj\" (UID: \"3e627486-0771-4742-90d4-a9166283471f\") " pod="openshift-dns-operator/dns-operator-744455d44c-n84cj"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.256208 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.264722 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.265258 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:50:57.765231125 +0000 UTC m=+216.409245381 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.265750 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76"
Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.266236 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:50:57.766208554 +0000 UTC m=+216.410222820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.275117 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.295611 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.315754 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.334596 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.340660 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bcdaf53-8f1a-4748-96bc-721dc6b821fc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cdjkj\" (UID: \"9bcdaf53-8f1a-4748-96bc-721dc6b821fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdjkj"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.355471 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.359911 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bcdaf53-8f1a-4748-96bc-721dc6b821fc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cdjkj\" (UID: \"9bcdaf53-8f1a-4748-96bc-721dc6b821fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdjkj"
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.371109 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.371270 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:50:57.871246142 +0000 UTC m=+216.515260378 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.371570 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76"
Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.371943 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:50:57.871933271 +0000 UTC m=+216.515947507 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.374932 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.395400 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.404548 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/129d8f6c-5fb6-48ca-b269-c8c17a3a3efe-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5c222\" (UID: \"129d8f6c-5fb6-48ca-b269-c8c17a3a3efe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5c222" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.415428 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.419160 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/129d8f6c-5fb6-48ca-b269-c8c17a3a3efe-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5c222\" (UID: \"129d8f6c-5fb6-48ca-b269-c8c17a3a3efe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5c222" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 
14:50:57.439188 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.456669 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.472928 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.473353 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:50:57.973325435 +0000 UTC m=+216.617339671 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.473807 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.474330 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:50:57.974306443 +0000 UTC m=+216.618320669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.475563 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.493310 4832 request.go:700] Waited for 1.001321916s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dingress-operator-dockercfg-7lnqk&limit=500&resourceVersion=0 Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.495415 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.515768 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.523957 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d1ff5656-430d-4071-9a26-ce6bf8ec844b-metrics-tls\") pod \"ingress-operator-5b745b69d9-dvb29\" (UID: \"d1ff5656-430d-4071-9a26-ce6bf8ec844b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.535176 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 
14:50:57.563999 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.570825 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1ff5656-430d-4071-9a26-ce6bf8ec844b-trusted-ca\") pod \"ingress-operator-5b745b69d9-dvb29\" (UID: \"d1ff5656-430d-4071-9a26-ce6bf8ec844b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.575055 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.575123 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.575623 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.075597323 +0000 UTC m=+216.719611549 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.594613 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.598609 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e84ddfa-88f9-4e3b-9708-65796373121b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jm6kd\" (UID: \"6e84ddfa-88f9-4e3b-9708-65796373121b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jm6kd" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.616237 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.625356 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e84ddfa-88f9-4e3b-9708-65796373121b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jm6kd\" (UID: \"6e84ddfa-88f9-4e3b-9708-65796373121b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jm6kd" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.636463 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 12 14:50:57 crc 
kubenswrapper[4832]: I0312 14:50:57.654752 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.676214 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.677740 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.678359 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.178332254 +0000 UTC m=+216.822346480 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.681918 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a9d5c80a-fef6-4eae-a1e9-951f2d72647b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wp6fb\" (UID: \"a9d5c80a-fef6-4eae-a1e9-951f2d72647b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp6fb" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.696436 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.716648 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.735412 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.740788 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24defe84-7690-4b69-b9db-5ee531d7f725-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gjmmz\" (UID: \"24defe84-7690-4b69-b9db-5ee531d7f725\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gjmmz" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.756880 4832 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.775449 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.779699 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.779942 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.279914263 +0000 UTC m=+216.923928529 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.780129 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.780564 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.28049697 +0000 UTC m=+216.924511236 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.795053 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.809327 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5b8b3d80-e845-47b0-928e-a3faff312e25-signing-key\") pod \"service-ca-9c57cc56f-5h6nt\" (UID: \"5b8b3d80-e845-47b0-928e-a3faff312e25\") " pod="openshift-service-ca/service-ca-9c57cc56f-5h6nt" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.815800 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.818880 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5b8b3d80-e845-47b0-928e-a3faff312e25-signing-cabundle\") pod \"service-ca-9c57cc56f-5h6nt\" (UID: \"5b8b3d80-e845-47b0-928e-a3faff312e25\") " pod="openshift-service-ca/service-ca-9c57cc56f-5h6nt" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.835249 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.854353 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 
14:50:57.858979 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faef0d2c-43e3-4bc4-92a1-e5c6b08cd982-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bb9w7\" (UID: \"faef0d2c-43e3-4bc4-92a1-e5c6b08cd982\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bb9w7" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.874853 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.878625 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b968d323-a039-4d05-9e1f-1d9d3b0ab1a1-serving-cert\") pod \"service-ca-operator-777779d784-xj2sf\" (UID: \"b968d323-a039-4d05-9e1f-1d9d3b0ab1a1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xj2sf" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.881271 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.881476 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.3814481 +0000 UTC m=+217.025462336 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.881818 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.882149 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.38213479 +0000 UTC m=+217.026149026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.895171 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.897942 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b968d323-a039-4d05-9e1f-1d9d3b0ab1a1-config\") pod \"service-ca-operator-777779d784-xj2sf\" (UID: \"b968d323-a039-4d05-9e1f-1d9d3b0ab1a1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xj2sf" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.915291 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.934620 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.949168 4832 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.949320 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e3e2355-1e51-4248-8dba-a8f3c45657f9-metrics-tls podName:7e3e2355-1e51-4248-8dba-a8f3c45657f9 nodeName:}" failed. 
No retries permitted until 2026-03-12 14:50:58.449282536 +0000 UTC m=+217.093296812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7e3e2355-1e51-4248-8dba-a8f3c45657f9-metrics-tls") pod "dns-default-svx7c" (UID: "7e3e2355-1e51-4248-8dba-a8f3c45657f9") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.949610 4832 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.949707 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35632592-89c3-413c-97d1-da2931f1a778-certs podName:35632592-89c3-413c-97d1-da2931f1a778 nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.449683247 +0000 UTC m=+217.093697513 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/35632592-89c3-413c-97d1-da2931f1a778-certs") pod "machine-config-server-k6mdb" (UID: "35632592-89c3-413c-97d1-da2931f1a778") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.954448 4832 secret.go:188] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.954518 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51605fc6-0da6-4a38-b44a-d8d47080ff6a-metrics-certs podName:51605fc6-0da6-4a38-b44a-d8d47080ff6a nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.454489236 +0000 UTC m=+217.098503472 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/51605fc6-0da6-4a38-b44a-d8d47080ff6a-metrics-certs") pod "router-default-5444994796-jthtj" (UID: "51605fc6-0da6-4a38-b44a-d8d47080ff6a") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.954541 4832 secret.go:188] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.954647 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51605fc6-0da6-4a38-b44a-d8d47080ff6a-stats-auth podName:51605fc6-0da6-4a38-b44a-d8d47080ff6a nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.454619429 +0000 UTC m=+217.098633705 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/51605fc6-0da6-4a38-b44a-d8d47080ff6a-stats-auth") pod "router-default-5444994796-jthtj" (UID: "51605fc6-0da6-4a38-b44a-d8d47080ff6a") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.954655 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.956905 4832 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.956949 4832 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.956993 4832 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/8fd26103-6087-4ded-9197-cd19279c4413-apiservice-cert podName:8fd26103-6087-4ded-9197-cd19279c4413 nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.456972547 +0000 UTC m=+217.100986783 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/8fd26103-6087-4ded-9197-cd19279c4413-apiservice-cert") pod "packageserver-d55dfcdfc-grxzk" (UID: "8fd26103-6087-4ded-9197-cd19279c4413") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.957021 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e8afb95-8d48-45d4-88f7-900c0dc949f3-cert podName:1e8afb95-8d48-45d4-88f7-900c0dc949f3 nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.457003048 +0000 UTC m=+217.101017344 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1e8afb95-8d48-45d4-88f7-900c0dc949f3-cert") pod "ingress-canary-62nqf" (UID: "1e8afb95-8d48-45d4-88f7-900c0dc949f3") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.957079 4832 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.957140 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/51605fc6-0da6-4a38-b44a-d8d47080ff6a-service-ca-bundle podName:51605fc6-0da6-4a38-b44a-d8d47080ff6a nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.457130282 +0000 UTC m=+217.101144518 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/51605fc6-0da6-4a38-b44a-d8d47080ff6a-service-ca-bundle") pod "router-default-5444994796-jthtj" (UID: "51605fc6-0da6-4a38-b44a-d8d47080ff6a") : failed to sync configmap cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.957214 4832 secret.go:188] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.957218 4832 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.957282 4832 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.957304 4832 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.957273 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51605fc6-0da6-4a38-b44a-d8d47080ff6a-default-certificate podName:51605fc6-0da6-4a38-b44a-d8d47080ff6a nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.457256375 +0000 UTC m=+217.101270681 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/51605fc6-0da6-4a38-b44a-d8d47080ff6a-default-certificate") pod "router-default-5444994796-jthtj" (UID: "51605fc6-0da6-4a38-b44a-d8d47080ff6a") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.957387 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fd26103-6087-4ded-9197-cd19279c4413-webhook-cert podName:8fd26103-6087-4ded-9197-cd19279c4413 nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.457361278 +0000 UTC m=+217.101375504 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/8fd26103-6087-4ded-9197-cd19279c4413-webhook-cert") pod "packageserver-d55dfcdfc-grxzk" (UID: "8fd26103-6087-4ded-9197-cd19279c4413") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.957406 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7e3e2355-1e51-4248-8dba-a8f3c45657f9-config-volume podName:7e3e2355-1e51-4248-8dba-a8f3c45657f9 nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.457399159 +0000 UTC m=+217.101413485 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7e3e2355-1e51-4248-8dba-a8f3c45657f9-config-volume") pod "dns-default-svx7c" (UID: "7e3e2355-1e51-4248-8dba-a8f3c45657f9") : failed to sync configmap cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.957419 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35632592-89c3-413c-97d1-da2931f1a778-node-bootstrap-token podName:35632592-89c3-413c-97d1-da2931f1a778 nodeName:}" failed. 
No retries permitted until 2026-03-12 14:50:58.45741315 +0000 UTC m=+217.101427376 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/35632592-89c3-413c-97d1-da2931f1a778-node-bootstrap-token") pod "machine-config-server-k6mdb" (UID: "35632592-89c3-413c-97d1-da2931f1a778") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.974644 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.983079 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.983223 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.483201833 +0000 UTC m=+217.127216059 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.987993 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:57 crc kubenswrapper[4832]: E0312 14:50:57.988996 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.48898138 +0000 UTC m=+217.132995606 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:57 crc kubenswrapper[4832]: I0312 14:50:57.994677 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.015081 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.034665 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.055658 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.075444 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.089187 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:50:58 crc kubenswrapper[4832]: E0312 14:50:58.089379 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.589354054 +0000 UTC m=+217.233368280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.089658 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:58 crc kubenswrapper[4832]: E0312 14:50:58.090001 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.589985312 +0000 UTC m=+217.233999598 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.094304 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.115111 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.135282 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.155308 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.176463 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.190853 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:50:58 crc kubenswrapper[4832]: E0312 14:50:58.191210 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-12 14:50:58.691178039 +0000 UTC m=+217.335192305 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.192160 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:58 crc kubenswrapper[4832]: E0312 14:50:58.192756 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.692733514 +0000 UTC m=+217.336747770 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.213906 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mg44\" (UniqueName: \"kubernetes.io/projected/ef9bd599-747b-470d-941b-fe7d6ee15be1-kube-api-access-8mg44\") pod \"route-controller-manager-6576b87f9c-z4rcp\" (UID: \"ef9bd599-747b-470d-941b-fe7d6ee15be1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.231251 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-577sv\" (UniqueName: \"kubernetes.io/projected/780e312c-4f87-40d8-b146-0bcefe9c9c89-kube-api-access-577sv\") pod \"machine-api-operator-5694c8668f-5g46q\" (UID: \"780e312c-4f87-40d8-b146-0bcefe9c9c89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5g46q" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.249777 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntms8\" (UniqueName: \"kubernetes.io/projected/3616a4cb-ef1d-4125-b880-5b1486eb1d55-kube-api-access-ntms8\") pod \"machine-approver-56656f9798-b9mvx\" (UID: \"3616a4cb-ef1d-4125-b880-5b1486eb1d55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9mvx" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.273082 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z846d\" (UniqueName: 
\"kubernetes.io/projected/d80173e6-eb6e-4671-b61c-f223b0f3dc24-kube-api-access-z846d\") pod \"apiserver-76f77b778f-8x4hd\" (UID: \"d80173e6-eb6e-4671-b61c-f223b0f3dc24\") " pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.292757 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t6v8\" (UniqueName: \"kubernetes.io/projected/fc57df00-709c-4cee-9d19-a00dca7d58da-kube-api-access-4t6v8\") pod \"openshift-config-operator-7777fb866f-8wfwt\" (UID: \"fc57df00-709c-4cee-9d19-a00dca7d58da\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.293057 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:50:58 crc kubenswrapper[4832]: E0312 14:50:58.293138 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.793121438 +0000 UTC m=+217.437135664 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.293411 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:58 crc kubenswrapper[4832]: E0312 14:50:58.294782 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.794771645 +0000 UTC m=+217.438785881 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.308210 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9zr7\" (UniqueName: \"kubernetes.io/projected/1276d8a9-5af1-4a3f-a61c-255ed424ee88-kube-api-access-f9zr7\") pod \"controller-manager-879f6c89f-86f5t\" (UID: \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.329614 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl4qb\" (UniqueName: \"kubernetes.io/projected/0e48a27d-76e1-45f3-87af-c9b306291d25-kube-api-access-zl4qb\") pod \"oauth-openshift-558db77b4-lgx9r\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.349323 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wgbn\" (UniqueName: \"kubernetes.io/projected/89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d-kube-api-access-6wgbn\") pod \"apiserver-7bbb656c7d-spzsp\" (UID: \"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.368340 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.373453 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqdxz\" (UniqueName: \"kubernetes.io/projected/86719732-2809-4511-8e2c-9fb82df5c4bc-kube-api-access-dqdxz\") pod \"openshift-apiserver-operator-796bbdcf4f-t2c67\" (UID: \"86719732-2809-4511-8e2c-9fb82df5c4bc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t2c67" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.388710 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.389384 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r24bk\" (UniqueName: \"kubernetes.io/projected/5d4068e0-53ed-433d-9657-ff75730d43a6-kube-api-access-r24bk\") pod \"downloads-7954f5f757-hz5vn\" (UID: \"5d4068e0-53ed-433d-9657-ff75730d43a6\") " pod="openshift-console/downloads-7954f5f757-hz5vn" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.393924 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:50:58 crc kubenswrapper[4832]: E0312 14:50:58.394115 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.894087288 +0000 UTC m=+217.538101524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.394483 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:58 crc kubenswrapper[4832]: E0312 14:50:58.394812 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.894793739 +0000 UTC m=+217.538807965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.395322 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.414847 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.415339 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqcjf\" (UniqueName: \"kubernetes.io/projected/a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38-kube-api-access-nqcjf\") pod \"console-operator-58897d9998-xfg24\" (UID: \"a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38\") " pod="openshift-console-operator/console-operator-58897d9998-xfg24" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.417166 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hz5vn" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.431951 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.435577 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.455974 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.456009 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9mvx" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.472851 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5g46q" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.474850 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.478382 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t2c67" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.493353 4832 request.go:700] Waited for 1.902453604s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.495531 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.496543 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.496824 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1e8afb95-8d48-45d4-88f7-900c0dc949f3-cert\") pod \"ingress-canary-62nqf\" (UID: \"1e8afb95-8d48-45d4-88f7-900c0dc949f3\") " pod="openshift-ingress-canary/ingress-canary-62nqf" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.496870 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/8fd26103-6087-4ded-9197-cd19279c4413-apiservice-cert\") pod \"packageserver-d55dfcdfc-grxzk\" (UID: \"8fd26103-6087-4ded-9197-cd19279c4413\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.496927 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e3e2355-1e51-4248-8dba-a8f3c45657f9-metrics-tls\") pod \"dns-default-svx7c\" (UID: \"7e3e2355-1e51-4248-8dba-a8f3c45657f9\") " pod="openshift-dns/dns-default-svx7c" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.497055 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/35632592-89c3-413c-97d1-da2931f1a778-certs\") pod \"machine-config-server-k6mdb\" (UID: \"35632592-89c3-413c-97d1-da2931f1a778\") " pod="openshift-machine-config-operator/machine-config-server-k6mdb" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.497122 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e3e2355-1e51-4248-8dba-a8f3c45657f9-config-volume\") pod \"dns-default-svx7c\" (UID: \"7e3e2355-1e51-4248-8dba-a8f3c45657f9\") " pod="openshift-dns/dns-default-svx7c" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.497191 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/51605fc6-0da6-4a38-b44a-d8d47080ff6a-stats-auth\") pod \"router-default-5444994796-jthtj\" (UID: \"51605fc6-0da6-4a38-b44a-d8d47080ff6a\") " pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.497251 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/51605fc6-0da6-4a38-b44a-d8d47080ff6a-service-ca-bundle\") pod \"router-default-5444994796-jthtj\" (UID: \"51605fc6-0da6-4a38-b44a-d8d47080ff6a\") " pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.497282 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/51605fc6-0da6-4a38-b44a-d8d47080ff6a-default-certificate\") pod \"router-default-5444994796-jthtj\" (UID: \"51605fc6-0da6-4a38-b44a-d8d47080ff6a\") " pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.497302 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/35632592-89c3-413c-97d1-da2931f1a778-node-bootstrap-token\") pod \"machine-config-server-k6mdb\" (UID: \"35632592-89c3-413c-97d1-da2931f1a778\") " pod="openshift-machine-config-operator/machine-config-server-k6mdb" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.497371 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8fd26103-6087-4ded-9197-cd19279c4413-webhook-cert\") pod \"packageserver-d55dfcdfc-grxzk\" (UID: \"8fd26103-6087-4ded-9197-cd19279c4413\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.497460 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51605fc6-0da6-4a38-b44a-d8d47080ff6a-metrics-certs\") pod \"router-default-5444994796-jthtj\" (UID: \"51605fc6-0da6-4a38-b44a-d8d47080ff6a\") " pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.498613 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt" Mar 12 14:50:58 crc kubenswrapper[4832]: E0312 14:50:58.498763 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:50:58.998739945 +0000 UTC m=+217.642754171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.499715 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e3e2355-1e51-4248-8dba-a8f3c45657f9-config-volume\") pod \"dns-default-svx7c\" (UID: \"7e3e2355-1e51-4248-8dba-a8f3c45657f9\") " pod="openshift-dns/dns-default-svx7c" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.502429 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e3e2355-1e51-4248-8dba-a8f3c45657f9-metrics-tls\") pod \"dns-default-svx7c\" (UID: \"7e3e2355-1e51-4248-8dba-a8f3c45657f9\") " pod="openshift-dns/dns-default-svx7c" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.502442 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8fd26103-6087-4ded-9197-cd19279c4413-apiservice-cert\") pod \"packageserver-d55dfcdfc-grxzk\" (UID: \"8fd26103-6087-4ded-9197-cd19279c4413\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.502557 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51605fc6-0da6-4a38-b44a-d8d47080ff6a-metrics-certs\") pod \"router-default-5444994796-jthtj\" (UID: \"51605fc6-0da6-4a38-b44a-d8d47080ff6a\") " pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.502564 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8fd26103-6087-4ded-9197-cd19279c4413-webhook-cert\") pod \"packageserver-d55dfcdfc-grxzk\" (UID: \"8fd26103-6087-4ded-9197-cd19279c4413\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.502822 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51605fc6-0da6-4a38-b44a-d8d47080ff6a-service-ca-bundle\") pod \"router-default-5444994796-jthtj\" (UID: \"51605fc6-0da6-4a38-b44a-d8d47080ff6a\") " pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.502936 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/51605fc6-0da6-4a38-b44a-d8d47080ff6a-default-certificate\") pod \"router-default-5444994796-jthtj\" (UID: \"51605fc6-0da6-4a38-b44a-d8d47080ff6a\") " pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.503091 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/51605fc6-0da6-4a38-b44a-d8d47080ff6a-stats-auth\") pod \"router-default-5444994796-jthtj\" (UID: \"51605fc6-0da6-4a38-b44a-d8d47080ff6a\") " 
pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.504455 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1e8afb95-8d48-45d4-88f7-900c0dc949f3-cert\") pod \"ingress-canary-62nqf\" (UID: \"1e8afb95-8d48-45d4-88f7-900c0dc949f3\") " pod="openshift-ingress-canary/ingress-canary-62nqf" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.515523 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.525235 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/35632592-89c3-413c-97d1-da2931f1a778-node-bootstrap-token\") pod \"machine-config-server-k6mdb\" (UID: \"35632592-89c3-413c-97d1-da2931f1a778\") " pod="openshift-machine-config-operator/machine-config-server-k6mdb" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.534116 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.535457 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.544444 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/35632592-89c3-413c-97d1-da2931f1a778-certs\") pod \"machine-config-server-k6mdb\" (UID: \"35632592-89c3-413c-97d1-da2931f1a778\") " pod="openshift-machine-config-operator/machine-config-server-k6mdb" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.557464 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.577040 4832 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.594822 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.598926 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:58 crc kubenswrapper[4832]: E0312 14:50:58.599477 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-12 14:50:59.099465469 +0000 UTC m=+217.743479695 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.632743 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrwcc\" (UniqueName: \"kubernetes.io/projected/b2442d67-5fd4-4cde-bf16-afc8b174b487-kube-api-access-lrwcc\") pod \"etcd-operator-b45778765-2jc2t\" (UID: \"b2442d67-5fd4-4cde-bf16-afc8b174b487\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.652092 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9mvx" event={"ID":"3616a4cb-ef1d-4125-b880-5b1486eb1d55","Type":"ContainerStarted","Data":"0038034d886fdccf5bf8017258edb073e0d49f7090fb3bd6706435864326571d"} Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.652930 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9br9b\" (UniqueName: \"kubernetes.io/projected/4cf9358d-29c3-4296-9ed7-740163adbcb8-kube-api-access-9br9b\") pod \"migrator-59844c95c7-wj5jt\" (UID: \"4cf9358d-29c3-4296-9ed7-740163adbcb8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wj5jt" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.692046 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skhsj\" (UniqueName: 
\"kubernetes.io/projected/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-kube-api-access-skhsj\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.700215 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:50:58 crc kubenswrapper[4832]: E0312 14:50:58.700472 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:50:59.20044757 +0000 UTC m=+217.844461796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.703785 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xfg24" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.715213 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-bound-sa-token\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.731140 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5mtl\" (UniqueName: \"kubernetes.io/projected/1e8afb95-8d48-45d4-88f7-900c0dc949f3-kube-api-access-c5mtl\") pod \"ingress-canary-62nqf\" (UID: \"1e8afb95-8d48-45d4-88f7-900c0dc949f3\") " pod="openshift-ingress-canary/ingress-canary-62nqf" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.749897 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvlg5\" (UniqueName: \"kubernetes.io/projected/51605fc6-0da6-4a38-b44a-d8d47080ff6a-kube-api-access-wvlg5\") pod \"router-default-5444994796-jthtj\" (UID: \"51605fc6-0da6-4a38-b44a-d8d47080ff6a\") " pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.771364 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddccm\" (UniqueName: \"kubernetes.io/projected/258f384d-e8e6-410b-acb9-50d871e0d0d6-kube-api-access-ddccm\") pod \"machine-config-operator-74547568cd-kmvjl\" (UID: \"258f384d-e8e6-410b-acb9-50d871e0d0d6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.790889 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb097132-cc58-4d48-82c5-1e9f0fc0d967-kube-api-access\") pod 
\"kube-apiserver-operator-766d6c64bb-n2mb5\" (UID: \"bb097132-cc58-4d48-82c5-1e9f0fc0d967\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2mb5" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.803019 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:58 crc kubenswrapper[4832]: E0312 14:50:58.803371 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:50:59.303355297 +0000 UTC m=+217.947369523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.807661 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9xgd\" (UniqueName: \"kubernetes.io/projected/3e627486-0771-4742-90d4-a9166283471f-kube-api-access-x9xgd\") pod \"dns-operator-744455d44c-n84cj\" (UID: \"3e627486-0771-4742-90d4-a9166283471f\") " pod="openshift-dns-operator/dns-operator-744455d44c-n84cj" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.809210 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp"] Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.836421 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdnzr\" (UniqueName: \"kubernetes.io/projected/faef0d2c-43e3-4bc4-92a1-e5c6b08cd982-kube-api-access-cdnzr\") pod \"package-server-manager-789f6589d5-bb9w7\" (UID: \"faef0d2c-43e3-4bc4-92a1-e5c6b08cd982\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bb9w7" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.838451 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp"] Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.844840 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8x4hd"] Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.848827 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffzbg\" (UniqueName: \"kubernetes.io/projected/5b8b3d80-e845-47b0-928e-a3faff312e25-kube-api-access-ffzbg\") pod \"service-ca-9c57cc56f-5h6nt\" (UID: \"5b8b3d80-e845-47b0-928e-a3faff312e25\") " pod="openshift-service-ca/service-ca-9c57cc56f-5h6nt" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.850876 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:58 crc kubenswrapper[4832]: W0312 14:50:58.856927 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89f0a9e3_a2f5_44f4_ac7c_e80b5643df2d.slice/crio-c1b30b20b4b59c52cede66c030b44862229ddc1dcd905088a4e79d75ad058960 WatchSource:0}: Error finding container c1b30b20b4b59c52cede66c030b44862229ddc1dcd905088a4e79d75ad058960: Status 404 returned error can't find the container with id c1b30b20b4b59c52cede66c030b44862229ddc1dcd905088a4e79d75ad058960 Mar 12 14:50:58 crc kubenswrapper[4832]: W0312 14:50:58.857580 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd80173e6_eb6e_4671_b61c_f223b0f3dc24.slice/crio-6f06d854398c17d1cda00b45b466c9af8817ef76a758e4c95ca78f684794d4cc WatchSource:0}: Error finding container 6f06d854398c17d1cda00b45b466c9af8817ef76a758e4c95ca78f684794d4cc: Status 404 returned error can't find the container with id 6f06d854398c17d1cda00b45b466c9af8817ef76a758e4c95ca78f684794d4cc Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.863153 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.868063 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-62nqf" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.874167 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxdx7\" (UniqueName: \"kubernetes.io/projected/31435028-adc4-4b77-85d3-5d7659cd80f0-kube-api-access-zxdx7\") pod \"marketplace-operator-79b997595-ph4hc\" (UID: \"31435028-adc4-4b77-85d3-5d7659cd80f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.876789 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wj5jt" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.884997 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2mb5" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.884924 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-86f5t"] Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.889749 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrhzd\" (UniqueName: \"kubernetes.io/projected/b968d323-a039-4d05-9e1f-1d9d3b0ab1a1-kube-api-access-xrhzd\") pod \"service-ca-operator-777779d784-xj2sf\" (UID: \"b968d323-a039-4d05-9e1f-1d9d3b0ab1a1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xj2sf" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.903990 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:50:58 crc kubenswrapper[4832]: E0312 
14:50:58.904656 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:50:59.404622936 +0000 UTC m=+218.048637152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.909064 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xfg24"] Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.911328 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxbxr\" (UniqueName: \"kubernetes.io/projected/2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad-kube-api-access-xxbxr\") pod \"olm-operator-6b444d44fb-txst6\" (UID: \"2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.928778 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.934135 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwxgg\" (UniqueName: \"kubernetes.io/projected/8ecc652e-061a-4e8a-8757-d6eea707acf1-kube-api-access-mwxgg\") pod \"cluster-image-registry-operator-dc59b4c8b-256sp\" (UID: \"8ecc652e-061a-4e8a-8757-d6eea707acf1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.958157 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d1ff5656-430d-4071-9a26-ce6bf8ec844b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dvb29\" (UID: \"d1ff5656-430d-4071-9a26-ce6bf8ec844b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.963297 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lgx9r"] Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.964410 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hz5vn"] Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.974780 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkx8k\" (UniqueName: \"kubernetes.io/projected/35da9b9e-133b-4d7f-a32e-908d9fc7734b-kube-api-access-fkx8k\") pod \"auto-csr-approver-29555450-w9jtm\" (UID: \"35da9b9e-133b-4d7f-a32e-908d9fc7734b\") " pod="openshift-infra/auto-csr-approver-29555450-w9jtm" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.975107 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6" Mar 12 14:50:58 crc kubenswrapper[4832]: W0312 14:50:58.989027 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d4068e0_53ed_433d_9657_ff75730d43a6.slice/crio-0483b06e72875182fc64d71b90b2cb2819048ffbdd5d39cfef11fb1659d1c8c0 WatchSource:0}: Error finding container 0483b06e72875182fc64d71b90b2cb2819048ffbdd5d39cfef11fb1659d1c8c0: Status 404 returned error can't find the container with id 0483b06e72875182fc64d71b90b2cb2819048ffbdd5d39cfef11fb1659d1c8c0 Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.990137 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6bhw\" (UniqueName: \"kubernetes.io/projected/6e84ddfa-88f9-4e3b-9708-65796373121b-kube-api-access-h6bhw\") pod \"kube-storage-version-migrator-operator-b67b599dd-jm6kd\" (UID: \"6e84ddfa-88f9-4e3b-9708-65796373121b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jm6kd" Mar 12 14:50:58 crc kubenswrapper[4832]: I0312 14:50:58.999577 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555450-w9jtm" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.009784 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:59 crc kubenswrapper[4832]: E0312 14:50:59.010270 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:50:59.510255551 +0000 UTC m=+218.154269777 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.021188 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvhk4\" (UniqueName: \"kubernetes.io/projected/35632592-89c3-413c-97d1-da2931f1a778-kube-api-access-fvhk4\") pod \"machine-config-server-k6mdb\" (UID: \"35632592-89c3-413c-97d1-da2931f1a778\") " pod="openshift-machine-config-operator/machine-config-server-k6mdb" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.026785 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.040194 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnp2t\" (UniqueName: \"kubernetes.io/projected/8fd26103-6087-4ded-9197-cd19279c4413-kube-api-access-pnp2t\") pod \"packageserver-d55dfcdfc-grxzk\" (UID: \"8fd26103-6087-4ded-9197-cd19279c4413\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.047945 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt"] Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.059831 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5g46q"] Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.062606 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-n84cj" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.068783 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0479cc4-afec-46e5-9472-e82716b4e9b6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8xd72\" (UID: \"d0479cc4-afec-46e5-9472-e82716b4e9b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8xd72" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.072367 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ecc652e-061a-4e8a-8757-d6eea707acf1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-256sp\" (UID: \"8ecc652e-061a-4e8a-8757-d6eea707acf1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.089628 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t2c67"] Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.093348 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jm6kd" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.097139 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndg4x\" (UniqueName: \"kubernetes.io/projected/44a36bdc-062d-4e68-abb8-4ec20ba3e41b-kube-api-access-ndg4x\") pod \"catalog-operator-68c6474976-d8ntm\" (UID: \"44a36bdc-062d-4e68-abb8-4ec20ba3e41b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.110993 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:50:59 crc kubenswrapper[4832]: E0312 14:50:59.111328 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:50:59.611312325 +0000 UTC m=+218.255326551 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.113352 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghn5p\" (UniqueName: \"kubernetes.io/projected/d1ff5656-430d-4071-9a26-ce6bf8ec844b-kube-api-access-ghn5p\") pod \"ingress-operator-5b745b69d9-dvb29\" (UID: \"d1ff5656-430d-4071-9a26-ce6bf8ec844b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.115856 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5h6nt" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.122841 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bb9w7" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.130070 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xj2sf" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.138449 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjwb9\" (UniqueName: \"kubernetes.io/projected/4519b556-4bf4-4c0a-a3a7-d7441728444d-kube-api-access-wjwb9\") pod \"authentication-operator-69f744f599-987hp\" (UID: \"4519b556-4bf4-4c0a-a3a7-d7441728444d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.141987 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.151058 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmjmm\" (UniqueName: \"kubernetes.io/projected/9bcdaf53-8f1a-4748-96bc-721dc6b821fc-kube-api-access-xmjmm\") pod \"openshift-controller-manager-operator-756b6f6bc6-cdjkj\" (UID: \"9bcdaf53-8f1a-4748-96bc-721dc6b821fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdjkj" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.170164 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhq5j\" (UniqueName: \"kubernetes.io/projected/fc63079e-bbae-4de6-b756-e23a6df3f250-kube-api-access-dhq5j\") pod \"cluster-samples-operator-665b6dd947-j2cpq\" (UID: \"fc63079e-bbae-4de6-b756-e23a6df3f250\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j2cpq" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.175946 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-k6mdb" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.192540 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/129d8f6c-5fb6-48ca-b269-c8c17a3a3efe-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5c222\" (UID: \"129d8f6c-5fb6-48ca-b269-c8c17a3a3efe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5c222" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.207540 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8xd72" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.207991 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j2cpq" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.212014 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:59 crc kubenswrapper[4832]: E0312 14:50:59.212314 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:50:59.712302506 +0000 UTC m=+218.356316732 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.212643 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f48kh\" (UniqueName: \"kubernetes.io/projected/0bf32718-d22d-4e55-b158-43a02ef6a67f-kube-api-access-f48kh\") pod \"collect-profiles-29555445-8xpmk\" (UID: \"0bf32718-d22d-4e55-b158-43a02ef6a67f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.213134 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wj5jt"] Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.235419 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8pkh\" (UniqueName: \"kubernetes.io/projected/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-kube-api-access-x8pkh\") pod \"console-f9d7485db-42j9g\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") " pod="openshift-console/console-f9d7485db-42j9g" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.258309 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbxhm\" (UniqueName: \"kubernetes.io/projected/579d6f1b-e8f5-4d51-9527-41988322b007-kube-api-access-gbxhm\") pod \"csi-hostpathplugin-gx8zc\" (UID: \"579d6f1b-e8f5-4d51-9527-41988322b007\") " pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.275714 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-vrq4q\" (UniqueName: \"kubernetes.io/projected/244172fe-e44e-4f8f-86d5-69f70a7c5dd0-kube-api-access-vrq4q\") pod \"machine-config-controller-84d6567774-dc9cx\" (UID: \"244172fe-e44e-4f8f-86d5-69f70a7c5dd0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dc9cx" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.295807 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sshvk\" (UniqueName: \"kubernetes.io/projected/24defe84-7690-4b69-b9db-5ee531d7f725-kube-api-access-sshvk\") pod \"multus-admission-controller-857f4d67dd-gjmmz\" (UID: \"24defe84-7690-4b69-b9db-5ee531d7f725\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gjmmz" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.311568 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.312786 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:50:59 crc kubenswrapper[4832]: E0312 14:50:59.313296 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:50:59.813282327 +0000 UTC m=+218.457296553 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.317030 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.318134 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hq87\" (UniqueName: \"kubernetes.io/projected/7e3e2355-1e51-4248-8dba-a8f3c45657f9-kube-api-access-6hq87\") pod \"dns-default-svx7c\" (UID: \"7e3e2355-1e51-4248-8dba-a8f3c45657f9\") " pod="openshift-dns/dns-default-svx7c" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.332603 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.337938 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg7l2\" (UniqueName: \"kubernetes.io/projected/a9d5c80a-fef6-4eae-a1e9-951f2d72647b-kube-api-access-mg7l2\") pod \"control-plane-machine-set-operator-78cbb6b69f-wp6fb\" (UID: \"a9d5c80a-fef6-4eae-a1e9-951f2d72647b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp6fb" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.344481 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.369426 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdjkj" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.378930 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5c222" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.384004 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.390859 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555450-w9jtm"] Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.399697 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6"] Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.400067 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp6fb" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.407574 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-gjmmz" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.409912 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2mb5"] Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.414571 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:59 crc kubenswrapper[4832]: E0312 14:50:59.414884 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:50:59.914869636 +0000 UTC m=+218.558883862 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.424470 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2jc2t"] Mar 12 14:50:59 crc kubenswrapper[4832]: W0312 14:50:59.425630 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35632592_89c3_413c_97d1_da2931f1a778.slice/crio-46e7429375c4ed18638b08348763d406de103c7e78d339683f8be6e2b2a53e77 WatchSource:0}: Error finding container 46e7429375c4ed18638b08348763d406de103c7e78d339683f8be6e2b2a53e77: Status 404 returned error can't find the container with id 46e7429375c4ed18638b08348763d406de103c7e78d339683f8be6e2b2a53e77 Mar 12 14:50:59 crc kubenswrapper[4832]: W0312 14:50:59.433563 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb097132_cc58_4d48_82c5_1e9f0fc0d967.slice/crio-877021f7b3588c5678651f110f80fc607e18b7535f31f5c93c383af411853a0f WatchSource:0}: Error finding container 877021f7b3588c5678651f110f80fc607e18b7535f31f5c93c383af411853a0f: Status 404 returned error can't find the container with id 877021f7b3588c5678651f110f80fc607e18b7535f31f5c93c383af411853a0f Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.459303 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl"] Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.459519 4832 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-svx7c" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.479202 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-62nqf"] Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.482129 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" Mar 12 14:50:59 crc kubenswrapper[4832]: W0312 14:50:59.503012 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ea7eb5a_39cf_4ff5_a9d3_0621a50f09ad.slice/crio-c3ec134f2e8c36341e28fe60dcace3873eb0e953a21bd9d339732a0336c5af62 WatchSource:0}: Error finding container c3ec134f2e8c36341e28fe60dcace3873eb0e953a21bd9d339732a0336c5af62: Status 404 returned error can't find the container with id c3ec134f2e8c36341e28fe60dcace3873eb0e953a21bd9d339732a0336c5af62 Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.510395 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.512725 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dc9cx" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.515198 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:50:59 crc kubenswrapper[4832]: E0312 14:50:59.515631 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:00.01561918 +0000 UTC m=+218.659633406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.520464 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-42j9g" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.617425 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:59 crc kubenswrapper[4832]: E0312 14:50:59.617716 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:00.117702773 +0000 UTC m=+218.761716999 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.674278 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6" event={"ID":"2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad","Type":"ContainerStarted","Data":"c3ec134f2e8c36341e28fe60dcace3873eb0e953a21bd9d339732a0336c5af62"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.679274 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t2c67" 
event={"ID":"86719732-2809-4511-8e2c-9fb82df5c4bc","Type":"ContainerStarted","Data":"55fc7fec95d6d7b0336ad2fb820f5eff805253aeb7d05640a6da3a090cd19e82"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.681566 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" event={"ID":"1276d8a9-5af1-4a3f-a61c-255ed424ee88","Type":"ContainerStarted","Data":"979dcbd84649263cd73ccfed498765de2a810187fc62fe68ae40e5c95516bbe5"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.681596 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" event={"ID":"1276d8a9-5af1-4a3f-a61c-255ed424ee88","Type":"ContainerStarted","Data":"4d95b1b10193add3054564bbd10dfabe154c5f755774a1fd52133a7e1a8f0b97"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.682278 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.684793 4832 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-86f5t container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.684844 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" podUID="1276d8a9-5af1-4a3f-a61c-255ed424ee88" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.686846 4832 generic.go:334] "Generic (PLEG): container finished" podID="d80173e6-eb6e-4671-b61c-f223b0f3dc24" 
containerID="506d9dc3531baad91b181e33c95d3d5dea5b9973752d69bdc0050fd93926fcc2" exitCode=0 Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.686972 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" event={"ID":"d80173e6-eb6e-4671-b61c-f223b0f3dc24","Type":"ContainerDied","Data":"506d9dc3531baad91b181e33c95d3d5dea5b9973752d69bdc0050fd93926fcc2"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.687019 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" event={"ID":"d80173e6-eb6e-4671-b61c-f223b0f3dc24","Type":"ContainerStarted","Data":"6f06d854398c17d1cda00b45b466c9af8817ef76a758e4c95ca78f684794d4cc"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.691975 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jthtj" event={"ID":"51605fc6-0da6-4a38-b44a-d8d47080ff6a","Type":"ContainerStarted","Data":"297e87f8cbfe7badac528cdd55f33288ca8092eb6612f548dbd443e650792564"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.692215 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jthtj" event={"ID":"51605fc6-0da6-4a38-b44a-d8d47080ff6a","Type":"ContainerStarted","Data":"0f3a837cb68038d8e5efd1b55844a9bca6ce67c2f7ee22f3e879282bfa5b2208"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.694709 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xfg24" event={"ID":"a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38","Type":"ContainerStarted","Data":"d9637aa010015c262a1d95bec18a2938c6e95123e9bc6d18fc1d341f717fa6c5"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.694731 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xfg24" 
event={"ID":"a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38","Type":"ContainerStarted","Data":"5936545e0efd697b576de6c2a7fdb14aa8888ebf4103b11918ae081466da4cb2"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.697769 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-xfg24" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.698671 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" event={"ID":"ef9bd599-747b-470d-941b-fe7d6ee15be1","Type":"ContainerStarted","Data":"7c04c82f84e6c42afb0b49de0f7cec6ef2d58552bb83ff0990dc1d4a9876c583"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.698714 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" event={"ID":"ef9bd599-747b-470d-941b-fe7d6ee15be1","Type":"ContainerStarted","Data":"5f233ff0953e4839d2cb610c5c9ba29880d540c0683598a4f42b886efee8eaf1"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.699202 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.699967 4832 patch_prober.go:28] interesting pod/console-operator-58897d9998-xfg24 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.700003 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xfg24" podUID="a4a40d3b-aa51-48bc-bdc0-57f5ca5e5c38" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: 
connection refused" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.700079 4832 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-z4rcp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.700095 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" podUID="ef9bd599-747b-470d-941b-fe7d6ee15be1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.701288 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-k6mdb" event={"ID":"35632592-89c3-413c-97d1-da2931f1a778","Type":"ContainerStarted","Data":"46e7429375c4ed18638b08348763d406de103c7e78d339683f8be6e2b2a53e77"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.722560 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" event={"ID":"b2442d67-5fd4-4cde-bf16-afc8b174b487","Type":"ContainerStarted","Data":"99e5ebb683470f3f4eb026ed3945e3d54b9749b7143b545b20a7b08c495cc93b"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.723350 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:50:59 crc kubenswrapper[4832]: E0312 14:50:59.723711 4832 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:00.223680508 +0000 UTC m=+218.867694784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.728748 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wj5jt" event={"ID":"4cf9358d-29c3-4296-9ed7-740163adbcb8","Type":"ContainerStarted","Data":"2d54d1cc321e8fc4a2b10284c162885fe1206563943ef9abfe525b082619a9fb"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.763656 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hz5vn" event={"ID":"5d4068e0-53ed-433d-9657-ff75730d43a6","Type":"ContainerStarted","Data":"0b6f15b70196ec4a3d35ea66e0e2c0cf0ed5afd4ccd5c61a8ff07820c5cd54ec"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.763709 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hz5vn" event={"ID":"5d4068e0-53ed-433d-9657-ff75730d43a6","Type":"ContainerStarted","Data":"0483b06e72875182fc64d71b90b2cb2819048ffbdd5d39cfef11fb1659d1c8c0"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.763953 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hz5vn" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.768343 4832 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-hz5vn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.768403 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hz5vn" podUID="5d4068e0-53ed-433d-9657-ff75730d43a6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.770013 4832 generic.go:334] "Generic (PLEG): container finished" podID="89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d" containerID="b73ce1e2a3715e1ff609c9da615b96cb079fd3dfc897fecbfc852eb97ea81ec6" exitCode=0 Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.770093 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" event={"ID":"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d","Type":"ContainerDied","Data":"b73ce1e2a3715e1ff609c9da615b96cb079fd3dfc897fecbfc852eb97ea81ec6"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.770123 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" event={"ID":"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d","Type":"ContainerStarted","Data":"c1b30b20b4b59c52cede66c030b44862229ddc1dcd905088a4e79d75ad058960"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.771962 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" event={"ID":"0e48a27d-76e1-45f3-87af-c9b306291d25","Type":"ContainerStarted","Data":"1a96a82daeb1bad3884445fc4d15ae934710e9ff893cb37f8c76033dee3d3eaa"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.773863 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-5g46q" event={"ID":"780e312c-4f87-40d8-b146-0bcefe9c9c89","Type":"ContainerStarted","Data":"35125d7a6b605f885d649ddb933d3ed495338c8c08a88090c93656e9d833bb3e"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.776185 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt" event={"ID":"fc57df00-709c-4cee-9d19-a00dca7d58da","Type":"ContainerStarted","Data":"3f415ef70a5c3e7f491464a1b62fc9e4d92c3e891b8c847a2e6fd6ef87e21cdd"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.783084 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9mvx" event={"ID":"3616a4cb-ef1d-4125-b880-5b1486eb1d55","Type":"ContainerStarted","Data":"c47261e1ce63290c68a1a3d2c09fbb56459c4f1d44946c757be70b4ef11364b7"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.783133 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9mvx" event={"ID":"3616a4cb-ef1d-4125-b880-5b1486eb1d55","Type":"ContainerStarted","Data":"181b102ec033cfe624bdb8265bca8c39b833ddc8be58b06164c22c3075befd26"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.789245 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xj2sf"] Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.801690 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555450-w9jtm" event={"ID":"35da9b9e-133b-4d7f-a32e-908d9fc7734b","Type":"ContainerStarted","Data":"a2e31dc1e06b8000c3f5da3c5aaa958359671b39e53887d7cef496248edd32b9"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.803533 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2mb5" 
event={"ID":"bb097132-cc58-4d48-82c5-1e9f0fc0d967","Type":"ContainerStarted","Data":"877021f7b3588c5678651f110f80fc607e18b7535f31f5c93c383af411853a0f"} Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.825358 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:50:59 crc kubenswrapper[4832]: E0312 14:50:59.826425 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:00.326405 +0000 UTC m=+218.970419336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.854599 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-jthtj" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.891145 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jthtj container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 
14:50:59.891207 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jthtj" podUID="51605fc6-0da6-4a38-b44a-d8d47080ff6a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 12 14:50:59 crc kubenswrapper[4832]: I0312 14:50:59.927901 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:50:59 crc kubenswrapper[4832]: E0312 14:50:59.928996 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:00.428970706 +0000 UTC m=+219.072984982 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.030909 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:00 crc kubenswrapper[4832]: E0312 14:51:00.031315 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:00.531289186 +0000 UTC m=+219.175303482 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.064323 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-jthtj" podStartSLOduration=154.064305158 podStartE2EDuration="2m34.064305158s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:00.063894416 +0000 UTC m=+218.707908642" watchObservedRunningTime="2026-03-12 14:51:00.064305158 +0000 UTC m=+218.708319384" Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.132666 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:00 crc kubenswrapper[4832]: E0312 14:51:00.133538 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:00.633522943 +0000 UTC m=+219.277537169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.150018 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-hz5vn" podStartSLOduration=154.150001448 podStartE2EDuration="2m34.150001448s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:00.149139933 +0000 UTC m=+218.793154169" watchObservedRunningTime="2026-03-12 14:51:00.150001448 +0000 UTC m=+218.794015674" Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.185845 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" podStartSLOduration=154.185824421 podStartE2EDuration="2m34.185824421s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:00.18439281 +0000 UTC m=+218.828407046" watchObservedRunningTime="2026-03-12 14:51:00.185824421 +0000 UTC m=+218.829838647" Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.234328 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n84cj"] Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.234873 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:00 crc kubenswrapper[4832]: E0312 14:51:00.235280 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:00.735266276 +0000 UTC m=+219.379280502 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.239264 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bb9w7"] Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.336115 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:00 crc kubenswrapper[4832]: E0312 14:51:00.336625 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-12 14:51:00.836547905 +0000 UTC m=+219.480562131 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.336734 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:00 crc kubenswrapper[4832]: E0312 14:51:00.337070 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:00.8370589 +0000 UTC m=+219.481073126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.401883 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9mvx" podStartSLOduration=155.401869278 podStartE2EDuration="2m35.401869278s" podCreationTimestamp="2026-03-12 14:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:00.401797566 +0000 UTC m=+219.045811792" watchObservedRunningTime="2026-03-12 14:51:00.401869278 +0000 UTC m=+219.045883504" Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.437347 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:00 crc kubenswrapper[4832]: E0312 14:51:00.437770 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:00.937752173 +0000 UTC m=+219.581766399 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.539313 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:00 crc kubenswrapper[4832]: E0312 14:51:00.539593 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:01.039581838 +0000 UTC m=+219.683596054 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.636692 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-xfg24" podStartSLOduration=154.636672087 podStartE2EDuration="2m34.636672087s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:00.591716051 +0000 UTC m=+219.235730297" watchObservedRunningTime="2026-03-12 14:51:00.636672087 +0000 UTC m=+219.280686313" Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.641517 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:00 crc kubenswrapper[4832]: E0312 14:51:00.645808 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:01.14578154 +0000 UTC m=+219.789795766 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.646031 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:00 crc kubenswrapper[4832]: E0312 14:51:00.646329 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:01.146323065 +0000 UTC m=+219.790337291 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.673556 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk"] Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.694164 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm"] Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.700381 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5h6nt"] Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.702608 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp"] Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.704287 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk"] Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.713582 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-svx7c"] Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.725460 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ph4hc"] Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.747157 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:00 crc kubenswrapper[4832]: E0312 14:51:00.747759 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:01.247727899 +0000 UTC m=+219.891742125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.763595 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j2cpq"] Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.765570 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8xd72"] Mar 12 14:51:00 crc kubenswrapper[4832]: W0312 14:51:00.773900 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fd26103_6087_4ded_9197_cd19279c4413.slice/crio-3bb0d4c43dab6aaf821026c7e6033e9488d2d6fcd510b5ff9602e2b76e2d458d WatchSource:0}: Error finding container 3bb0d4c43dab6aaf821026c7e6033e9488d2d6fcd510b5ff9602e2b76e2d458d: Status 404 returned error can't find the container with id 3bb0d4c43dab6aaf821026c7e6033e9488d2d6fcd510b5ff9602e2b76e2d458d Mar 
12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.777933 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jm6kd"] Mar 12 14:51:00 crc kubenswrapper[4832]: W0312 14:51:00.803793 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b8b3d80_e845_47b0_928e_a3faff312e25.slice/crio-d730404c6f4a897507d3279574ed5aaec0bf401546574eb1730cb9d93a94b6cc WatchSource:0}: Error finding container d730404c6f4a897507d3279574ed5aaec0bf401546574eb1730cb9d93a94b6cc: Status 404 returned error can't find the container with id d730404c6f4a897507d3279574ed5aaec0bf401546574eb1730cb9d93a94b6cc Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.849561 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:00 crc kubenswrapper[4832]: E0312 14:51:00.850178 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:01.350144681 +0000 UTC m=+219.994158907 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.857241 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bb9w7" event={"ID":"faef0d2c-43e3-4bc4-92a1-e5c6b08cd982","Type":"ContainerStarted","Data":"7b989a863204dc51dfd0a749e6753ef4665d2cc6afeee51690c1188cf8535fdf"} Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.857285 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bb9w7" event={"ID":"faef0d2c-43e3-4bc4-92a1-e5c6b08cd982","Type":"ContainerStarted","Data":"e0f3673cb43f4e3eccb94d7d27682e0cba12c11d699bc20a9666ddcdf4086a6a"} Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.860185 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jthtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:51:00 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Mar 12 14:51:00 crc kubenswrapper[4832]: [+]process-running ok Mar 12 14:51:00 crc kubenswrapper[4832]: healthz check failed Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.860257 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jthtj" podUID="51605fc6-0da6-4a38-b44a-d8d47080ff6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 
14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.873882 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-987hp"] Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.877443 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wj5jt" event={"ID":"4cf9358d-29c3-4296-9ed7-740163adbcb8","Type":"ContainerStarted","Data":"560fe273f961db9e256070d420a803c464fd0534ddd4874b73d55114c99db269"} Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.907739 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-62nqf" event={"ID":"1e8afb95-8d48-45d4-88f7-900c0dc949f3","Type":"ContainerStarted","Data":"4710e969971a7c9e8f5c3bf780db8a92ec9b6b33bb0f5607a2c5be1f80fe493a"} Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.907780 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-62nqf" event={"ID":"1e8afb95-8d48-45d4-88f7-900c0dc949f3","Type":"ContainerStarted","Data":"e3b2c5cee5da8c897dd3006d7e75b90cbb9a790280c133b6a381b186da7e97da"} Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.932944 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5g46q" event={"ID":"780e312c-4f87-40d8-b146-0bcefe9c9c89","Type":"ContainerStarted","Data":"ccc6aad94a60e58ca91688487ef9ea09f4a6d47a1b00ed06d6f5c91689a0dc95"} Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.933222 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5g46q" event={"ID":"780e312c-4f87-40d8-b146-0bcefe9c9c89","Type":"ContainerStarted","Data":"a265eb655287c8a019b6bfffc77b95e53317093cf1a15580e8b59af3f971ecd9"} Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.936723 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="fc57df00-709c-4cee-9d19-a00dca7d58da" containerID="5d3adb4c8bb1e80111c504aedff022da4cecbc042a2011bb0f87a219fed01fbc" exitCode=0 Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.936893 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt" event={"ID":"fc57df00-709c-4cee-9d19-a00dca7d58da","Type":"ContainerDied","Data":"5d3adb4c8bb1e80111c504aedff022da4cecbc042a2011bb0f87a219fed01fbc"} Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.943692 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" podStartSLOduration=154.943675607 podStartE2EDuration="2m34.943675607s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:00.885374137 +0000 UTC m=+219.529388373" watchObservedRunningTime="2026-03-12 14:51:00.943675607 +0000 UTC m=+219.587689823" Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.944257 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm" event={"ID":"44a36bdc-062d-4e68-abb8-4ec20ba3e41b","Type":"ContainerStarted","Data":"2007fe0acd783104904aae3b207985a271a00267d4a746122d47513366c2b6e6"} Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.946980 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5c222"] Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.950950 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:00 crc kubenswrapper[4832]: E0312 14:51:00.951891 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:01.451255316 +0000 UTC m=+220.095269542 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.955271 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-42j9g"] Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.967533 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp6fb"] Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.969878 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6" event={"ID":"2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad","Type":"ContainerStarted","Data":"861f1ad16bbdd48f0a6a583e41aa6f1234bd7481e1497ce3cb23bf4c35a48ee4"} Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.970243 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6" Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.980849 4832 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-txst6 container/olm-operator 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.980990 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6" podUID="2ea7eb5a-39cf-4ff5-a9d3-0621a50f09ad" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Mar 12 14:51:00 crc kubenswrapper[4832]: I0312 14:51:00.985717 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk" event={"ID":"0bf32718-d22d-4e55-b158-43a02ef6a67f","Type":"ContainerStarted","Data":"631f747bb4cb3108c32e4feb329441df54615304ea6f58d5ca2d889c5b7adb4b"} Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:00.999159 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl" event={"ID":"258f384d-e8e6-410b-acb9-50d871e0d0d6","Type":"ContainerStarted","Data":"53e013767c7a24c92a0ff2a99ec43be98dcad0c6d60e295e76bed1cfb39c7c9f"} Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:00.999229 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl" event={"ID":"258f384d-e8e6-410b-acb9-50d871e0d0d6","Type":"ContainerStarted","Data":"48e5357acd1ea6d5117b7528355c405ad7307018669eed1c3b16b68113126454"} Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.007746 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" event={"ID":"b2442d67-5fd4-4cde-bf16-afc8b174b487","Type":"ContainerStarted","Data":"47e28b46df51e876304f7107c0659b8d8c93ead4253e8616d401e01986a2e293"} Mar 12 14:51:01 crc 
kubenswrapper[4832]: I0312 14:51:01.012412 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp" event={"ID":"8ecc652e-061a-4e8a-8757-d6eea707acf1","Type":"ContainerStarted","Data":"c81d46f9e00f1113594ffb68392e4ff5d96fa20d6ad9404c5d477007940fdc6d"} Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.055132 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:01 crc kubenswrapper[4832]: E0312 14:51:01.055464 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:01.555450279 +0000 UTC m=+220.199464505 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.056464 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xj2sf" event={"ID":"b968d323-a039-4d05-9e1f-1d9d3b0ab1a1","Type":"ContainerStarted","Data":"c88c1ca9598e65e19aa9cb66ec6911adf7c28cfb8c98fceb53bb9e4793fcab93"} Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.056553 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xj2sf" event={"ID":"b968d323-a039-4d05-9e1f-1d9d3b0ab1a1","Type":"ContainerStarted","Data":"71af8367e4f8a3b6cbfb96b0866a338a5ad56c1e0b092151440af44f595845d9"} Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.062942 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" event={"ID":"89f0a9e3-a2f5-44f4-ac7c-e80b5643df2d","Type":"ContainerStarted","Data":"4a79d50e64805212ec6f70ef2f5a6f7c55d54510d7cd56d6f93b4ead73c806be"} Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.066851 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" event={"ID":"8fd26103-6087-4ded-9197-cd19279c4413","Type":"ContainerStarted","Data":"3bb0d4c43dab6aaf821026c7e6033e9488d2d6fcd510b5ff9602e2b76e2d458d"} Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.107654 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29"] 
Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.112786 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t2c67" event={"ID":"86719732-2809-4511-8e2c-9fb82df5c4bc","Type":"ContainerStarted","Data":"264b5fc69b4cd0068e039e23649847ccf17a12ac3df9e1131d343093a79e6a11"} Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.117523 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" event={"ID":"0e48a27d-76e1-45f3-87af-c9b306291d25","Type":"ContainerStarted","Data":"3d758abe05f734853633aff229be4c2b821573b43b71d4c6bd254ae404cfca2d"} Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.118079 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:51:01 crc kubenswrapper[4832]: W0312 14:51:01.122996 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod129d8f6c_5fb6_48ca_b269_c8c17a3a3efe.slice/crio-fb4fe3c5c7f66295d4baf4b86b4646e6629064cd16979e75fd8f032d39a4664d WatchSource:0}: Error finding container fb4fe3c5c7f66295d4baf4b86b4646e6629064cd16979e75fd8f032d39a4664d: Status 404 returned error can't find the container with id fb4fe3c5c7f66295d4baf4b86b4646e6629064cd16979e75fd8f032d39a4664d Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.132730 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdjkj"] Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.132870 4832 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lgx9r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" start-of-body= Mar 12 
14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.132933 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" podUID="0e48a27d-76e1-45f3-87af-c9b306291d25" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.154707 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" event={"ID":"d80173e6-eb6e-4671-b61c-f223b0f3dc24","Type":"ContainerStarted","Data":"d61a196358562f3ee6b6cd393228149aa53f4f8c4ef8a7619f5d64087a08335c"} Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.157869 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:01 crc kubenswrapper[4832]: E0312 14:51:01.158835 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:01.658808879 +0000 UTC m=+220.302823145 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.161118 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-svx7c" event={"ID":"7e3e2355-1e51-4248-8dba-a8f3c45657f9","Type":"ContainerStarted","Data":"458b16808ca58461e684de49b707caecf17191a8c55b75ac99b4f4daa41d5de1"} Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.177126 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dc9cx"] Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.191274 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n84cj" event={"ID":"3e627486-0771-4742-90d4-a9166283471f","Type":"ContainerStarted","Data":"681dc512b4593b7ba96987c79debc954bbaece37e3c53afecef4b40667e8e434"} Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.194569 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gx8zc"] Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.203008 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-k6mdb" event={"ID":"35632592-89c3-413c-97d1-da2931f1a778","Type":"ContainerStarted","Data":"b15555baeb824bfd770c32700df387986aff0aae2161b0fed070f0f0f93c44a5"} Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.218877 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2mb5" 
event={"ID":"bb097132-cc58-4d48-82c5-1e9f0fc0d967","Type":"ContainerStarted","Data":"ebfc8dfa209901fa8c3001d8d080fcfa31033f97b64bd9b059c405fae5e2d8a5"} Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.219872 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-hz5vn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.219922 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hz5vn" podUID="5d4068e0-53ed-433d-9657-ff75730d43a6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.230313 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.244346 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.261377 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:01 crc kubenswrapper[4832]: E0312 14:51:01.263064 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-12 14:51:01.763046234 +0000 UTC m=+220.407060520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:01 crc kubenswrapper[4832]: W0312 14:51:01.285626 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod244172fe_e44e_4f8f_86d5_69f70a7c5dd0.slice/crio-1cac26365371b79149b15eec5542ac9eaddae204e206d4f5d04ca0a17bde38a3 WatchSource:0}: Error finding container 1cac26365371b79149b15eec5542ac9eaddae204e206d4f5d04ca0a17bde38a3: Status 404 returned error can't find the container with id 1cac26365371b79149b15eec5542ac9eaddae204e206d4f5d04ca0a17bde38a3 Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.288907 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gjmmz"] Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.313673 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-xfg24" Mar 12 14:51:01 crc kubenswrapper[4832]: W0312 14:51:01.313803 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod579d6f1b_e8f5_4d51_9527_41988322b007.slice/crio-9cf844cc6b800720080a01fd4849d202269d865dc4ffe50742cb25073158c32a WatchSource:0}: Error finding container 9cf844cc6b800720080a01fd4849d202269d865dc4ffe50742cb25073158c32a: Status 404 returned error can't find the container with id 
9cf844cc6b800720080a01fd4849d202269d865dc4ffe50742cb25073158c32a Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.322630 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-5g46q" podStartSLOduration=155.32258933 podStartE2EDuration="2m35.32258933s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:01.304803917 +0000 UTC m=+219.948818143" watchObservedRunningTime="2026-03-12 14:51:01.32258933 +0000 UTC m=+219.966603576" Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.364995 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.365240 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-2jc2t" podStartSLOduration=155.365219749 podStartE2EDuration="2m35.365219749s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:01.364400025 +0000 UTC m=+220.008414251" watchObservedRunningTime="2026-03-12 14:51:01.365219749 +0000 UTC m=+220.009233975" Mar 12 14:51:01 crc kubenswrapper[4832]: E0312 14:51:01.366524 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-12 14:51:01.866484565 +0000 UTC m=+220.510498791 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.432430 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-62nqf" podStartSLOduration=5.432411596 podStartE2EDuration="5.432411596s" podCreationTimestamp="2026-03-12 14:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:01.432037025 +0000 UTC m=+220.076051251" watchObservedRunningTime="2026-03-12 14:51:01.432411596 +0000 UTC m=+220.076425822" Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.440013 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" podStartSLOduration=156.439994955 podStartE2EDuration="2m36.439994955s" podCreationTimestamp="2026-03-12 14:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:01.405411248 +0000 UTC m=+220.049425474" watchObservedRunningTime="2026-03-12 14:51:01.439994955 +0000 UTC m=+220.084009181" Mar 12 14:51:01 crc kubenswrapper[4832]: W0312 14:51:01.457239 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24defe84_7690_4b69_b9db_5ee531d7f725.slice/crio-185a1736015b7b4417f8b68e691efc1d3aa3fdcda9cf7c442e2d803e252e142e WatchSource:0}: Error finding container 185a1736015b7b4417f8b68e691efc1d3aa3fdcda9cf7c442e2d803e252e142e: Status 404 returned error can't find the container with id 185a1736015b7b4417f8b68e691efc1d3aa3fdcda9cf7c442e2d803e252e142e Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.467273 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:01 crc kubenswrapper[4832]: E0312 14:51:01.467741 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:01.967722954 +0000 UTC m=+220.611737170 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.476378 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" podStartSLOduration=155.476355323 podStartE2EDuration="2m35.476355323s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:01.474095398 +0000 UTC m=+220.118109634" watchObservedRunningTime="2026-03-12 14:51:01.476355323 +0000 UTC m=+220.120369549" Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.514861 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t2c67" podStartSLOduration=156.514845972 podStartE2EDuration="2m36.514845972s" podCreationTimestamp="2026-03-12 14:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:01.512567677 +0000 UTC m=+220.156581993" watchObservedRunningTime="2026-03-12 14:51:01.514845972 +0000 UTC m=+220.158860198" Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.568129 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:01 crc kubenswrapper[4832]: E0312 14:51:01.568546 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:02.0685303 +0000 UTC m=+220.712544526 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.617493 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6" podStartSLOduration=155.617475031 podStartE2EDuration="2m35.617475031s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:01.615195675 +0000 UTC m=+220.259209911" watchObservedRunningTime="2026-03-12 14:51:01.617475031 +0000 UTC m=+220.261489257" Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.672480 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:01 crc kubenswrapper[4832]: E0312 
14:51:01.677171 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:02.177154441 +0000 UTC m=+220.821168677 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.719736 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2mb5" podStartSLOduration=155.719717938 podStartE2EDuration="2m35.719717938s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:01.690187737 +0000 UTC m=+220.334201963" watchObservedRunningTime="2026-03-12 14:51:01.719717938 +0000 UTC m=+220.363732164" Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.754021 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xj2sf" podStartSLOduration=155.754006077 podStartE2EDuration="2m35.754006077s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:01.752617277 +0000 UTC m=+220.396631503" watchObservedRunningTime="2026-03-12 14:51:01.754006077 +0000 UTC m=+220.398020303" Mar 12 14:51:01 crc 
kubenswrapper[4832]: I0312 14:51:01.778857 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:01 crc kubenswrapper[4832]: E0312 14:51:01.779258 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:02.279243514 +0000 UTC m=+220.923257740 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.797476 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-k6mdb" podStartSLOduration=5.797461359 podStartE2EDuration="5.797461359s" podCreationTimestamp="2026-03-12 14:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:01.796012558 +0000 UTC m=+220.440026784" watchObservedRunningTime="2026-03-12 14:51:01.797461359 +0000 UTC m=+220.441475585" Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.858733 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jthtj container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:51:01 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Mar 12 14:51:01 crc kubenswrapper[4832]: [+]process-running ok Mar 12 14:51:01 crc kubenswrapper[4832]: healthz check failed Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.858789 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jthtj" podUID="51605fc6-0da6-4a38-b44a-d8d47080ff6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.883113 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:01 crc kubenswrapper[4832]: E0312 14:51:01.883525 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:02.38349435 +0000 UTC m=+221.027508666 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:01 crc kubenswrapper[4832]: I0312 14:51:01.983866 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:01 crc kubenswrapper[4832]: E0312 14:51:01.989467 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:02.489446994 +0000 UTC m=+221.133461220 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.091182 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:02 crc kubenswrapper[4832]: E0312 14:51:02.091575 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:02.591562548 +0000 UTC m=+221.235576774 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.193319 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:02 crc kubenswrapper[4832]: E0312 14:51:02.193758 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:02.693742224 +0000 UTC m=+221.337756450 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.241599 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp" event={"ID":"8ecc652e-061a-4e8a-8757-d6eea707acf1","Type":"ContainerStarted","Data":"77039d5ec5354c11af4415881eae1b189c0268f7ca5b3fe6600cd2d8c607bee5"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.263781 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gjmmz" event={"ID":"24defe84-7690-4b69-b9db-5ee531d7f725","Type":"ContainerStarted","Data":"185a1736015b7b4417f8b68e691efc1d3aa3fdcda9cf7c442e2d803e252e142e"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.274914 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-256sp" podStartSLOduration=156.274897523 podStartE2EDuration="2m36.274897523s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:02.273878484 +0000 UTC m=+220.917892710" watchObservedRunningTime="2026-03-12 14:51:02.274897523 +0000 UTC m=+220.918911749" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.281607 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5c222" 
event={"ID":"129d8f6c-5fb6-48ca-b269-c8c17a3a3efe","Type":"ContainerStarted","Data":"fb4fe3c5c7f66295d4baf4b86b4646e6629064cd16979e75fd8f032d39a4664d"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.295218 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jm6kd" event={"ID":"6e84ddfa-88f9-4e3b-9708-65796373121b","Type":"ContainerStarted","Data":"906e62146fc4bbd5f2e1784b899e7b240c4a74d779dee42b2324739c66e083c1"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.295261 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jm6kd" event={"ID":"6e84ddfa-88f9-4e3b-9708-65796373121b","Type":"ContainerStarted","Data":"43cafb628944ebe2490d1857e9c18e0b04b4598b0bfb31ba3dccbbbea759268a"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.296490 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:02 crc kubenswrapper[4832]: E0312 14:51:02.296798 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:02.796788054 +0000 UTC m=+221.440802280 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.307089 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk" event={"ID":"0bf32718-d22d-4e55-b158-43a02ef6a67f","Type":"ContainerStarted","Data":"419df817519c7b2b9f1e806963b4f568797fe722156998155b51647be6b23668"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.315925 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j2cpq" event={"ID":"fc63079e-bbae-4de6-b756-e23a6df3f250","Type":"ContainerStarted","Data":"9476d70d1bd1663eb944bdd115aae8d0424577c4ae7f62104748d853611b312b"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.315970 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j2cpq" event={"ID":"fc63079e-bbae-4de6-b756-e23a6df3f250","Type":"ContainerStarted","Data":"3e80c22418322d32eb9971392a0fe5facaef0e333982093722fdb341b442c818"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.316065 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jm6kd" podStartSLOduration=156.316050419 podStartE2EDuration="2m36.316050419s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-12 14:51:02.315349779 +0000 UTC m=+220.959364025" watchObservedRunningTime="2026-03-12 14:51:02.316050419 +0000 UTC m=+220.960064645" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.344843 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wj5jt" event={"ID":"4cf9358d-29c3-4296-9ed7-740163adbcb8","Type":"ContainerStarted","Data":"f1d28ab7e320cf217edaa44eec3987dcacd575d4c77d20a9466bd4752e84d378"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.357774 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk" podStartSLOduration=157.357751522 podStartE2EDuration="2m37.357751522s" podCreationTimestamp="2026-03-12 14:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:02.345821518 +0000 UTC m=+220.989835744" watchObservedRunningTime="2026-03-12 14:51:02.357751522 +0000 UTC m=+221.001765748" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.381750 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wj5jt" podStartSLOduration=156.381732143 podStartE2EDuration="2m36.381732143s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:02.380700453 +0000 UTC m=+221.024714689" watchObservedRunningTime="2026-03-12 14:51:02.381732143 +0000 UTC m=+221.025746369" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.386868 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" 
event={"ID":"31435028-adc4-4b77-85d3-5d7659cd80f0","Type":"ContainerStarted","Data":"fe3982da7b728fa4f445f193ad50d7f3b9e91bb61e73838a0e014d3e3ae44ff8"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.386912 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" event={"ID":"31435028-adc4-4b77-85d3-5d7659cd80f0","Type":"ContainerStarted","Data":"a874895a1d7d77115fe598cc50f027d73dca9e0892502f388992056228614caf"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.387856 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.388562 4832 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ph4hc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.388592 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" podUID="31435028-adc4-4b77-85d3-5d7659cd80f0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.397097 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:02 crc kubenswrapper[4832]: E0312 14:51:02.398358 4832 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:02.898343802 +0000 UTC m=+221.542358028 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.407309 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29" event={"ID":"d1ff5656-430d-4071-9a26-ce6bf8ec844b","Type":"ContainerStarted","Data":"79130ff2c1bf26a183c4f1965c3bd43378c865fb71e6904d4b9ac38d8d203c10"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.416165 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" event={"ID":"d80173e6-eb6e-4671-b61c-f223b0f3dc24","Type":"ContainerStarted","Data":"c09687272f1077ed2069447557e5f6d150b1cd300fa502775a4cf799e469286b"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.429682 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5h6nt" event={"ID":"5b8b3d80-e845-47b0-928e-a3faff312e25","Type":"ContainerStarted","Data":"b90e736006eefb4a91b60f1f32925a7f60f8a4f8fe671a9529261f389c439c63"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.429724 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5h6nt" 
event={"ID":"5b8b3d80-e845-47b0-928e-a3faff312e25","Type":"ContainerStarted","Data":"d730404c6f4a897507d3279574ed5aaec0bf401546574eb1730cb9d93a94b6cc"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.451115 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" podStartSLOduration=156.451098843 podStartE2EDuration="2m36.451098843s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:02.421240472 +0000 UTC m=+221.065254708" watchObservedRunningTime="2026-03-12 14:51:02.451098843 +0000 UTC m=+221.095113069" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.456350 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-svx7c" event={"ID":"7e3e2355-1e51-4248-8dba-a8f3c45657f9","Type":"ContainerStarted","Data":"66c55be5f07b55a8307e7a30dfa0ed1764a3607c315ea6b726f3010443766350"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.465338 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bb9w7" event={"ID":"faef0d2c-43e3-4bc4-92a1-e5c6b08cd982","Type":"ContainerStarted","Data":"8f78995252c95f08e64b4030e086626b5acf88216fca63139bde7e2be3bc69be"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.465985 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bb9w7" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.473678 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" podStartSLOduration=157.473662613 podStartE2EDuration="2m37.473662613s" podCreationTimestamp="2026-03-12 14:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:02.454008616 +0000 UTC m=+221.098022862" watchObservedRunningTime="2026-03-12 14:51:02.473662613 +0000 UTC m=+221.117676839" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.473999 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp" event={"ID":"4519b556-4bf4-4c0a-a3a7-d7441728444d","Type":"ContainerStarted","Data":"97dc7258c037c1b5986c674f3c5828d1fbcac57b663f34e8f353772839ccd580"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.474058 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp" event={"ID":"4519b556-4bf4-4c0a-a3a7-d7441728444d","Type":"ContainerStarted","Data":"3ed242d467c44d5d3876c06dbb34d344928f61f1b85c856e321b7d418288e4d1"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.474679 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-5h6nt" podStartSLOduration=156.474673612 podStartE2EDuration="2m36.474673612s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:02.472607993 +0000 UTC m=+221.116622219" watchObservedRunningTime="2026-03-12 14:51:02.474673612 +0000 UTC m=+221.118687838" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.495554 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" event={"ID":"8fd26103-6087-4ded-9197-cd19279c4413","Type":"ContainerStarted","Data":"eccdfb2b6809250521702d1d3af9bcd9eb8fa315981bae902335fc78ae504960"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.496475 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.497446 4832 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-grxzk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" start-of-body= Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.497472 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" podUID="8fd26103-6087-4ded-9197-cd19279c4413" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.507720 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bb9w7" podStartSLOduration=156.507707955 podStartE2EDuration="2m36.507707955s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:02.505204932 +0000 UTC m=+221.149219158" watchObservedRunningTime="2026-03-12 14:51:02.507707955 +0000 UTC m=+221.151722181" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.512758 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp6fb" event={"ID":"a9d5c80a-fef6-4eae-a1e9-951f2d72647b","Type":"ContainerStarted","Data":"d2add73cdfa163c2292e7522c5863a18c662eabad8d2698564a3506e7fde28bd"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.513560 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:02 crc kubenswrapper[4832]: E0312 14:51:02.515769 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:03.015754646 +0000 UTC m=+221.659768872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.524742 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dc9cx" event={"ID":"244172fe-e44e-4f8f-86d5-69f70a7c5dd0","Type":"ContainerStarted","Data":"1cac26365371b79149b15eec5542ac9eaddae204e206d4f5d04ca0a17bde38a3"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.544659 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-42j9g" event={"ID":"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06","Type":"ContainerStarted","Data":"cd4276755d6479f784ca5f8b8ffac1b8497cc93ee54153e41cc18fd9a5bcf766"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.560228 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdjkj" 
event={"ID":"9bcdaf53-8f1a-4748-96bc-721dc6b821fc","Type":"ContainerStarted","Data":"940034c93ec6ab462f598db0c3ee7e8af7059b50c12cba619735b88f0f52933a"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.581279 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" podStartSLOduration=156.581263675 podStartE2EDuration="2m36.581263675s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:02.551816336 +0000 UTC m=+221.195830562" watchObservedRunningTime="2026-03-12 14:51:02.581263675 +0000 UTC m=+221.225277901" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.600849 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm" event={"ID":"44a36bdc-062d-4e68-abb8-4ec20ba3e41b","Type":"ContainerStarted","Data":"aabe5a21b996506197b46b76d28432a3f09d34bf3621e74aea0ca9b31c3e3ea7"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.601726 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.604838 4832 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-d8ntm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.604886 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm" podUID="44a36bdc-062d-4e68-abb8-4ec20ba3e41b" containerName="catalog-operator" probeResult="failure" output="Get 
\"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.628736 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.639318 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" event={"ID":"579d6f1b-e8f5-4d51-9527-41988322b007","Type":"ContainerStarted","Data":"9cf844cc6b800720080a01fd4849d202269d865dc4ffe50742cb25073158c32a"} Mar 12 14:51:02 crc kubenswrapper[4832]: E0312 14:51:02.639744 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:03.13972166 +0000 UTC m=+221.783735946 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.646916 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n84cj" event={"ID":"3e627486-0771-4742-90d4-a9166283471f","Type":"ContainerStarted","Data":"27b1e992e3c8631fcd76b175d959feba08b4d5e2f1c3761f9a378a677147872d"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.659114 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-987hp" podStartSLOduration=157.659094499 podStartE2EDuration="2m37.659094499s" podCreationTimestamp="2026-03-12 14:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:02.581681757 +0000 UTC m=+221.225695983" watchObservedRunningTime="2026-03-12 14:51:02.659094499 +0000 UTC m=+221.303108725" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.661381 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm" podStartSLOduration=156.661367144 podStartE2EDuration="2m36.661367144s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:02.658019738 +0000 UTC m=+221.302033964" watchObservedRunningTime="2026-03-12 14:51:02.661367144 +0000 UTC m=+221.305381370" Mar 12 
14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.669287 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl" event={"ID":"258f384d-e8e6-410b-acb9-50d871e0d0d6","Type":"ContainerStarted","Data":"c5e563563dce228344e15a2ea48e146a82073c8b1188641ff89539e3b0892e38"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.716473 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8xd72" event={"ID":"d0479cc4-afec-46e5-9472-e82716b4e9b6","Type":"ContainerStarted","Data":"426dade2868b21fe78ec30f575c66881b65d5258bb9ae718d3aae3eb80ed61a6"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.743221 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:02 crc kubenswrapper[4832]: E0312 14:51:02.744292 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:03.244281535 +0000 UTC m=+221.888295761 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.757410 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt" event={"ID":"fc57df00-709c-4cee-9d19-a00dca7d58da","Type":"ContainerStarted","Data":"8e401ceb0fa86886d45326461f1fadd1c549c4d4dd1e5e0f7a5ea98facf5f7e8"} Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.757446 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.773459 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-txst6" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.778927 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.846167 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:02 crc kubenswrapper[4832]: E0312 14:51:02.847650 4832 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:03.347622784 +0000 UTC m=+221.991637010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.857872 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jthtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:51:02 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Mar 12 14:51:02 crc kubenswrapper[4832]: [+]process-running ok Mar 12 14:51:02 crc kubenswrapper[4832]: healthz check failed Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.857918 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jthtj" podUID="51605fc6-0da6-4a38-b44a-d8d47080ff6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:51:02 crc kubenswrapper[4832]: I0312 14:51:02.950184 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:02 crc kubenswrapper[4832]: E0312 14:51:02.951947 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:03.451931121 +0000 UTC m=+222.095945337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.051325 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:03 crc kubenswrapper[4832]: E0312 14:51:03.051852 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:03.551840041 +0000 UTC m=+222.195854267 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.158772 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:03 crc kubenswrapper[4832]: E0312 14:51:03.159173 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:03.659158715 +0000 UTC m=+222.303172941 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.162376 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmvjl" podStartSLOduration=157.162356477 podStartE2EDuration="2m37.162356477s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:03.159803643 +0000 UTC m=+221.803817879" watchObservedRunningTime="2026-03-12 14:51:03.162356477 +0000 UTC m=+221.806370703" Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.260266 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:03 crc kubenswrapper[4832]: E0312 14:51:03.260652 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:03.76063666 +0000 UTC m=+222.404650886 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.361668 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:03 crc kubenswrapper[4832]: E0312 14:51:03.362230 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:03.862218919 +0000 UTC m=+222.506233145 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.378295 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.378345 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.382757 4832 patch_prober.go:28] interesting pod/apiserver-76f77b778f-8x4hd container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.382812 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" podUID="d80173e6-eb6e-4671-b61c-f223b0f3dc24" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.389633 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.389684 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.422450 
4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt" podStartSLOduration=157.422434755 podStartE2EDuration="2m37.422434755s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:03.420459848 +0000 UTC m=+222.064474074" watchObservedRunningTime="2026-03-12 14:51:03.422434755 +0000 UTC m=+222.066448981" Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.454055 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.462948 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:03 crc kubenswrapper[4832]: E0312 14:51:03.463277 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:03.963263412 +0000 UTC m=+222.607277638 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.564078 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:03 crc kubenswrapper[4832]: E0312 14:51:03.564400 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:04.064387247 +0000 UTC m=+222.708401473 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.665135 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:03 crc kubenswrapper[4832]: E0312 14:51:03.665516 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:04.165489191 +0000 UTC m=+222.809503417 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.766956 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:03 crc kubenswrapper[4832]: E0312 14:51:03.767447 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:04.26743137 +0000 UTC m=+222.911445596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.776629 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j2cpq" event={"ID":"fc63079e-bbae-4de6-b756-e23a6df3f250","Type":"ContainerStarted","Data":"2384ad7e2a9d0a8803ea1d53d9c98810c7fc7d974ccaabf1f2092dce00f0cedd"} Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.787035 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-42j9g" event={"ID":"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06","Type":"ContainerStarted","Data":"44bb98e1bd9057c73143d286bec50d90408b4c12a5a16c8cc7e60f78cc466199"} Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.801763 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j2cpq" podStartSLOduration=157.801748589 podStartE2EDuration="2m37.801748589s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:03.799951978 +0000 UTC m=+222.443966214" watchObservedRunningTime="2026-03-12 14:51:03.801748589 +0000 UTC m=+222.445762815" Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.804028 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gjmmz" 
event={"ID":"24defe84-7690-4b69-b9db-5ee531d7f725","Type":"ContainerStarted","Data":"d5e5a42c125ffcb3c14ccf2c5764eea1669456b0fc092be0ae6e1501b0bbdfcb"} Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.804061 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gjmmz" event={"ID":"24defe84-7690-4b69-b9db-5ee531d7f725","Type":"ContainerStarted","Data":"9c7c1a8739b656b9022504b1c513a27dcff51a043863ec458b16483017c7727c"} Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.809884 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8xd72" event={"ID":"d0479cc4-afec-46e5-9472-e82716b4e9b6","Type":"ContainerStarted","Data":"662f169096b02bd34ca254ec72cd2f9610e74a772ca197e4be827dbef1b0fdf5"} Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.816631 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n84cj" event={"ID":"3e627486-0771-4742-90d4-a9166283471f","Type":"ContainerStarted","Data":"0ca61bac2a6f93068ba1bec64f06cadc49bf52e53df3f4255ddf71d64a12dca8"} Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.824060 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-42j9g" podStartSLOduration=157.824043642 podStartE2EDuration="2m37.824043642s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:03.821924901 +0000 UTC m=+222.465939147" watchObservedRunningTime="2026-03-12 14:51:03.824043642 +0000 UTC m=+222.468057868" Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.826020 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dc9cx" 
event={"ID":"244172fe-e44e-4f8f-86d5-69f70a7c5dd0","Type":"ContainerStarted","Data":"ab07c4a969870c3ada22fc34ad3ac3e6ceb7d51b2b6af1ef7947571833c0e489"} Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.826085 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dc9cx" event={"ID":"244172fe-e44e-4f8f-86d5-69f70a7c5dd0","Type":"ContainerStarted","Data":"24442de2f9806211c62f5815d20b7e3a232fd34c5527e31cd80d091445008479"} Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.841539 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-svx7c" event={"ID":"7e3e2355-1e51-4248-8dba-a8f3c45657f9","Type":"ContainerStarted","Data":"a98e1e171afa1a6dbd6764b2e2892be1c5a4bce18a77ec87e9608f893dbfc413"} Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.841857 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-svx7c" Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.856573 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jthtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:51:03 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Mar 12 14:51:03 crc kubenswrapper[4832]: [+]process-running ok Mar 12 14:51:03 crc kubenswrapper[4832]: healthz check failed Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.856627 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jthtj" podUID="51605fc6-0da6-4a38-b44a-d8d47080ff6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.863792 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdjkj" event={"ID":"9bcdaf53-8f1a-4748-96bc-721dc6b821fc","Type":"ContainerStarted","Data":"310d7c797ffef59e4b22ca81cb2030d40169e727a52f1a94ece3818c206b94da"} Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.868271 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:03 crc kubenswrapper[4832]: E0312 14:51:03.868386 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:04.368363639 +0000 UTC m=+223.012377865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.869385 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:03 crc kubenswrapper[4832]: E0312 14:51:03.869659 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:04.369651886 +0000 UTC m=+223.013666102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.879177 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5c222" event={"ID":"129d8f6c-5fb6-48ca-b269-c8c17a3a3efe","Type":"ContainerStarted","Data":"9fa19ea737bc3887d310a0d296548b428bc41cff73b138bd40e268e4757a4784"} Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.892912 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29" event={"ID":"d1ff5656-430d-4071-9a26-ce6bf8ec844b","Type":"ContainerStarted","Data":"f8bcd2925063f75db87af5475763cc6bea6c537d9f3da38efdcd3f35a72b58d1"} Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.892966 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29" event={"ID":"d1ff5656-430d-4071-9a26-ce6bf8ec844b","Type":"ContainerStarted","Data":"ff1c8c3ed9f5451e10dc1bd9e6ff2daef7e443ebe222df1812ec739e35fab11e"} Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.912306 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-n84cj" podStartSLOduration=157.912286665 podStartE2EDuration="2m37.912286665s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:03.839938249 +0000 UTC m=+222.483952475" 
watchObservedRunningTime="2026-03-12 14:51:03.912286665 +0000 UTC m=+222.556300901" Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.923136 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp6fb" event={"ID":"a9d5c80a-fef6-4eae-a1e9-951f2d72647b","Type":"ContainerStarted","Data":"fbdee0d79f69be390f78fcd2c88385098ba1859bf8526abf97feb6df3af3cf27"} Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.925803 4832 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ph4hc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.925859 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" podUID="31435028-adc4-4b77-85d3-5d7659cd80f0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.955224 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8ntm" Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.965049 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-spzsp" Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.978764 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 
14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.979619 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-gjmmz" podStartSLOduration=157.979602026 podStartE2EDuration="2m37.979602026s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:03.977993129 +0000 UTC m=+222.622007345" watchObservedRunningTime="2026-03-12 14:51:03.979602026 +0000 UTC m=+222.623616272" Mar 12 14:51:03 crc kubenswrapper[4832]: E0312 14:51:03.980302 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:04.480279145 +0000 UTC m=+223.124293371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:03 crc kubenswrapper[4832]: I0312 14:51:03.996466 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8xd72" podStartSLOduration=157.996446761 podStartE2EDuration="2m37.996446761s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:03.935313119 +0000 UTC m=+222.579327345" 
watchObservedRunningTime="2026-03-12 14:51:03.996446761 +0000 UTC m=+222.640460987" Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.052097 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dvb29" podStartSLOduration=158.052077745 podStartE2EDuration="2m38.052077745s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:04.019773104 +0000 UTC m=+222.663787330" watchObservedRunningTime="2026-03-12 14:51:04.052077745 +0000 UTC m=+222.696091971" Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.052581 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cdjkj" podStartSLOduration=158.052577319 podStartE2EDuration="2m38.052577319s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:04.050732056 +0000 UTC m=+222.694746282" watchObservedRunningTime="2026-03-12 14:51:04.052577319 +0000 UTC m=+222.696591535" Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.081379 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:04 crc kubenswrapper[4832]: E0312 14:51:04.113958 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-03-12 14:51:04.613944068 +0000 UTC m=+223.257958294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.191993 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:04 crc kubenswrapper[4832]: E0312 14:51:04.192402 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:04.69238472 +0000 UTC m=+223.336398946 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.193481 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5c222" podStartSLOduration=158.193471781 podStartE2EDuration="2m38.193471781s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:04.188809397 +0000 UTC m=+222.832823643" watchObservedRunningTime="2026-03-12 14:51:04.193471781 +0000 UTC m=+222.837486007" Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.193604 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dc9cx" podStartSLOduration=158.193600615 podStartE2EDuration="2m38.193600615s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:04.149691149 +0000 UTC m=+222.793705375" watchObservedRunningTime="2026-03-12 14:51:04.193600615 +0000 UTC m=+222.837614841" Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.204314 4832 ???:1] "http: TLS handshake error from 192.168.126.11:42396: no serving certificate available for the kubelet" Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.294111 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:04 crc kubenswrapper[4832]: E0312 14:51:04.294388 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:04.79437086 +0000 UTC m=+223.438385086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.305144 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-svx7c" podStartSLOduration=8.30512626 podStartE2EDuration="8.30512626s" podCreationTimestamp="2026-03-12 14:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:04.279420119 +0000 UTC m=+222.923434345" watchObservedRunningTime="2026-03-12 14:51:04.30512626 +0000 UTC m=+222.949140486" Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.337652 4832 ???:1] "http: TLS handshake error from 192.168.126.11:42410: no serving certificate available for the kubelet" Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.382482 4832 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp6fb" podStartSLOduration=158.382466779 podStartE2EDuration="2m38.382466779s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:04.306687585 +0000 UTC m=+222.950701811" watchObservedRunningTime="2026-03-12 14:51:04.382466779 +0000 UTC m=+223.026481005" Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.394758 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:04 crc kubenswrapper[4832]: E0312 14:51:04.394887 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:04.894869247 +0000 UTC m=+223.538883463 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.395036 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:04 crc kubenswrapper[4832]: E0312 14:51:04.395338 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:04.89533096 +0000 UTC m=+223.539345186 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.451972 4832 ???:1] "http: TLS handshake error from 192.168.126.11:42426: no serving certificate available for the kubelet" Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.496673 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:04 crc kubenswrapper[4832]: E0312 14:51:04.496785 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:04.996768335 +0000 UTC m=+223.640782561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.497063 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:04 crc kubenswrapper[4832]: E0312 14:51:04.497342 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:04.997334851 +0000 UTC m=+223.641349077 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.541743 4832 ???:1] "http: TLS handshake error from 192.168.126.11:42442: no serving certificate available for the kubelet" Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.598694 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:04 crc kubenswrapper[4832]: E0312 14:51:04.599065 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:05.099050743 +0000 UTC m=+223.743064969 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.640900 4832 ???:1] "http: TLS handshake error from 192.168.126.11:42444: no serving certificate available for the kubelet" Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.699896 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:04 crc kubenswrapper[4832]: E0312 14:51:04.700197 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:05.200183309 +0000 UTC m=+223.844197535 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.742863 4832 ???:1] "http: TLS handshake error from 192.168.126.11:42446: no serving certificate available for the kubelet" Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.801387 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:04 crc kubenswrapper[4832]: E0312 14:51:04.801576 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:05.301548921 +0000 UTC m=+223.945563147 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.801757 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:04 crc kubenswrapper[4832]: E0312 14:51:04.802138 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:05.302123207 +0000 UTC m=+223.946137483 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.861413 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jthtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:51:04 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Mar 12 14:51:04 crc kubenswrapper[4832]: [+]process-running ok Mar 12 14:51:04 crc kubenswrapper[4832]: healthz check failed Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.861479 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jthtj" podUID="51605fc6-0da6-4a38-b44a-d8d47080ff6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.879541 4832 ???:1] "http: TLS handshake error from 192.168.126.11:42448: no serving certificate available for the kubelet" Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.902343 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:04 crc kubenswrapper[4832]: E0312 14:51:04.902596 4832 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:05.402553903 +0000 UTC m=+224.046568129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.902814 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:04 crc kubenswrapper[4832]: E0312 14:51:04.903136 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:05.403122469 +0000 UTC m=+224.047136745 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.925716 4832 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-grxzk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.925829 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" podUID="8fd26103-6087-4ded-9197-cd19279c4413" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.968954 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" event={"ID":"579d6f1b-e8f5-4d51-9527-41988322b007","Type":"ContainerStarted","Data":"16c42aa74c319b68f91551d60f93edf1fe2f1259dccc241965bfbed898e5b92c"} Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.971844 4832 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ph4hc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 12 
14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.971883 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" podUID="31435028-adc4-4b77-85d3-5d7659cd80f0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 12 14:51:04 crc kubenswrapper[4832]: I0312 14:51:04.993391 4832 ???:1] "http: TLS handshake error from 192.168.126.11:42464: no serving certificate available for the kubelet" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.004381 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:05 crc kubenswrapper[4832]: E0312 14:51:05.004677 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:05.504655756 +0000 UTC m=+224.148669992 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.004838 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:05 crc kubenswrapper[4832]: E0312 14:51:05.005151 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:05.50514226 +0000 UTC m=+224.149156496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.106530 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:05 crc kubenswrapper[4832]: E0312 14:51:05.106913 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:05.606886513 +0000 UTC m=+224.250900739 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.109001 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:05 crc kubenswrapper[4832]: E0312 14:51:05.109976 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:05.609962692 +0000 UTC m=+224.253976918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.175952 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qbwsc"] Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.176983 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qbwsc" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.180357 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.212882 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:05 crc kubenswrapper[4832]: E0312 14:51:05.213027 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:05.713001702 +0000 UTC m=+224.357015928 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.213174 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:05 crc kubenswrapper[4832]: E0312 14:51:05.213546 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:05.713531138 +0000 UTC m=+224.357545364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.235593 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qbwsc"] Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.314865 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.315091 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fdc1c63-8a73-405f-aede-75834651cccc-utilities\") pod \"certified-operators-qbwsc\" (UID: \"7fdc1c63-8a73-405f-aede-75834651cccc\") " pod="openshift-marketplace/certified-operators-qbwsc" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.315210 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fdc1c63-8a73-405f-aede-75834651cccc-catalog-content\") pod \"certified-operators-qbwsc\" (UID: \"7fdc1c63-8a73-405f-aede-75834651cccc\") " pod="openshift-marketplace/certified-operators-qbwsc" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.315237 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-59fzc\" (UniqueName: \"kubernetes.io/projected/7fdc1c63-8a73-405f-aede-75834651cccc-kube-api-access-59fzc\") pod \"certified-operators-qbwsc\" (UID: \"7fdc1c63-8a73-405f-aede-75834651cccc\") " pod="openshift-marketplace/certified-operators-qbwsc" Mar 12 14:51:05 crc kubenswrapper[4832]: E0312 14:51:05.315392 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:05.815373594 +0000 UTC m=+224.459387830 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.381104 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xql8n"] Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.382256 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xql8n" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.390656 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.398494 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grxzk" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.400380 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xql8n"] Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.416536 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fdc1c63-8a73-405f-aede-75834651cccc-catalog-content\") pod \"certified-operators-qbwsc\" (UID: \"7fdc1c63-8a73-405f-aede-75834651cccc\") " pod="openshift-marketplace/certified-operators-qbwsc" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.416567 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59fzc\" (UniqueName: \"kubernetes.io/projected/7fdc1c63-8a73-405f-aede-75834651cccc-kube-api-access-59fzc\") pod \"certified-operators-qbwsc\" (UID: \"7fdc1c63-8a73-405f-aede-75834651cccc\") " pod="openshift-marketplace/certified-operators-qbwsc" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.416639 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fdc1c63-8a73-405f-aede-75834651cccc-utilities\") pod \"certified-operators-qbwsc\" (UID: \"7fdc1c63-8a73-405f-aede-75834651cccc\") " pod="openshift-marketplace/certified-operators-qbwsc" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.416664 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:05 crc kubenswrapper[4832]: E0312 14:51:05.416952 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:05.916938472 +0000 UTC m=+224.560952698 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.417529 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fdc1c63-8a73-405f-aede-75834651cccc-catalog-content\") pod \"certified-operators-qbwsc\" (UID: \"7fdc1c63-8a73-405f-aede-75834651cccc\") " pod="openshift-marketplace/certified-operators-qbwsc" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.417817 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fdc1c63-8a73-405f-aede-75834651cccc-utilities\") pod \"certified-operators-qbwsc\" (UID: \"7fdc1c63-8a73-405f-aede-75834651cccc\") " pod="openshift-marketplace/certified-operators-qbwsc" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.455918 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-59fzc\" (UniqueName: \"kubernetes.io/projected/7fdc1c63-8a73-405f-aede-75834651cccc-kube-api-access-59fzc\") pod \"certified-operators-qbwsc\" (UID: \"7fdc1c63-8a73-405f-aede-75834651cccc\") " pod="openshift-marketplace/certified-operators-qbwsc" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.496704 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qbwsc" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.517160 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.517484 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17368088-aec0-4319-8575-045b54487a1f-catalog-content\") pod \"community-operators-xql8n\" (UID: \"17368088-aec0-4319-8575-045b54487a1f\") " pod="openshift-marketplace/community-operators-xql8n" Mar 12 14:51:05 crc kubenswrapper[4832]: E0312 14:51:05.517552 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:06.01748675 +0000 UTC m=+224.661500976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.517586 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17368088-aec0-4319-8575-045b54487a1f-utilities\") pod \"community-operators-xql8n\" (UID: \"17368088-aec0-4319-8575-045b54487a1f\") " pod="openshift-marketplace/community-operators-xql8n" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.517645 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmcgq\" (UniqueName: \"kubernetes.io/projected/17368088-aec0-4319-8575-045b54487a1f-kube-api-access-xmcgq\") pod \"community-operators-xql8n\" (UID: \"17368088-aec0-4319-8575-045b54487a1f\") " pod="openshift-marketplace/community-operators-xql8n" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.568958 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gc4cj"] Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.570428 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gc4cj" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.583300 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gc4cj"] Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.618538 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17368088-aec0-4319-8575-045b54487a1f-catalog-content\") pod \"community-operators-xql8n\" (UID: \"17368088-aec0-4319-8575-045b54487a1f\") " pod="openshift-marketplace/community-operators-xql8n" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.618583 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17368088-aec0-4319-8575-045b54487a1f-utilities\") pod \"community-operators-xql8n\" (UID: \"17368088-aec0-4319-8575-045b54487a1f\") " pod="openshift-marketplace/community-operators-xql8n" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.618639 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmcgq\" (UniqueName: \"kubernetes.io/projected/17368088-aec0-4319-8575-045b54487a1f-kube-api-access-xmcgq\") pod \"community-operators-xql8n\" (UID: \"17368088-aec0-4319-8575-045b54487a1f\") " pod="openshift-marketplace/community-operators-xql8n" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.618747 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.619084 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17368088-aec0-4319-8575-045b54487a1f-catalog-content\") pod \"community-operators-xql8n\" (UID: \"17368088-aec0-4319-8575-045b54487a1f\") " pod="openshift-marketplace/community-operators-xql8n" Mar 12 14:51:05 crc kubenswrapper[4832]: E0312 14:51:05.619105 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:06.119091609 +0000 UTC m=+224.763105835 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.619442 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17368088-aec0-4319-8575-045b54487a1f-utilities\") pod \"community-operators-xql8n\" (UID: \"17368088-aec0-4319-8575-045b54487a1f\") " pod="openshift-marketplace/community-operators-xql8n" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.654302 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmcgq\" (UniqueName: \"kubernetes.io/projected/17368088-aec0-4319-8575-045b54487a1f-kube-api-access-xmcgq\") pod \"community-operators-xql8n\" (UID: \"17368088-aec0-4319-8575-045b54487a1f\") " pod="openshift-marketplace/community-operators-xql8n" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.695808 4832 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-xql8n" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.700984 4832 ???:1] "http: TLS handshake error from 192.168.126.11:42478: no serving certificate available for the kubelet" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.719681 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:05 crc kubenswrapper[4832]: E0312 14:51:05.719848 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:06.219821343 +0000 UTC m=+224.863835559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.720009 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-727pv\" (UniqueName: \"kubernetes.io/projected/28ad10d5-8a9a-418b-af56-da46474279fe-kube-api-access-727pv\") pod \"certified-operators-gc4cj\" (UID: \"28ad10d5-8a9a-418b-af56-da46474279fe\") " pod="openshift-marketplace/certified-operators-gc4cj" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.720107 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ad10d5-8a9a-418b-af56-da46474279fe-catalog-content\") pod \"certified-operators-gc4cj\" (UID: \"28ad10d5-8a9a-418b-af56-da46474279fe\") " pod="openshift-marketplace/certified-operators-gc4cj" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.720212 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.720276 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/28ad10d5-8a9a-418b-af56-da46474279fe-utilities\") pod \"certified-operators-gc4cj\" (UID: \"28ad10d5-8a9a-418b-af56-da46474279fe\") " pod="openshift-marketplace/certified-operators-gc4cj" Mar 12 14:51:05 crc kubenswrapper[4832]: E0312 14:51:05.720544 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:06.220484492 +0000 UTC m=+224.864498718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.778424 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9n2m2"] Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.779882 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9n2m2" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.791681 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9n2m2"] Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.823273 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.823533 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ad10d5-8a9a-418b-af56-da46474279fe-catalog-content\") pod \"certified-operators-gc4cj\" (UID: \"28ad10d5-8a9a-418b-af56-da46474279fe\") " pod="openshift-marketplace/certified-operators-gc4cj" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.823599 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ad10d5-8a9a-418b-af56-da46474279fe-utilities\") pod \"certified-operators-gc4cj\" (UID: \"28ad10d5-8a9a-418b-af56-da46474279fe\") " pod="openshift-marketplace/certified-operators-gc4cj" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.823641 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-727pv\" (UniqueName: \"kubernetes.io/projected/28ad10d5-8a9a-418b-af56-da46474279fe-kube-api-access-727pv\") pod \"certified-operators-gc4cj\" (UID: \"28ad10d5-8a9a-418b-af56-da46474279fe\") " pod="openshift-marketplace/certified-operators-gc4cj" Mar 12 14:51:05 crc kubenswrapper[4832]: E0312 14:51:05.823957 4832 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:06.323943405 +0000 UTC m=+224.967957631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.824656 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ad10d5-8a9a-418b-af56-da46474279fe-catalog-content\") pod \"certified-operators-gc4cj\" (UID: \"28ad10d5-8a9a-418b-af56-da46474279fe\") " pod="openshift-marketplace/certified-operators-gc4cj" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.824861 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ad10d5-8a9a-418b-af56-da46474279fe-utilities\") pod \"certified-operators-gc4cj\" (UID: \"28ad10d5-8a9a-418b-af56-da46474279fe\") " pod="openshift-marketplace/certified-operators-gc4cj" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.854948 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-727pv\" (UniqueName: \"kubernetes.io/projected/28ad10d5-8a9a-418b-af56-da46474279fe-kube-api-access-727pv\") pod \"certified-operators-gc4cj\" (UID: \"28ad10d5-8a9a-418b-af56-da46474279fe\") " pod="openshift-marketplace/certified-operators-gc4cj" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.856884 4832 patch_prober.go:28] interesting 
pod/router-default-5444994796-jthtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:51:05 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Mar 12 14:51:05 crc kubenswrapper[4832]: [+]process-running ok Mar 12 14:51:05 crc kubenswrapper[4832]: healthz check failed Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.856946 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jthtj" podUID="51605fc6-0da6-4a38-b44a-d8d47080ff6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.895164 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gc4cj" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.924903 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxrbm\" (UniqueName: \"kubernetes.io/projected/bb45efb5-4239-4b47-9664-12fd61be0894-kube-api-access-nxrbm\") pod \"community-operators-9n2m2\" (UID: \"bb45efb5-4239-4b47-9664-12fd61be0894\") " pod="openshift-marketplace/community-operators-9n2m2" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.924940 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb45efb5-4239-4b47-9664-12fd61be0894-utilities\") pod \"community-operators-9n2m2\" (UID: \"bb45efb5-4239-4b47-9664-12fd61be0894\") " pod="openshift-marketplace/community-operators-9n2m2" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.925010 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bb45efb5-4239-4b47-9664-12fd61be0894-catalog-content\") pod \"community-operators-9n2m2\" (UID: \"bb45efb5-4239-4b47-9664-12fd61be0894\") " pod="openshift-marketplace/community-operators-9n2m2" Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.925037 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:05 crc kubenswrapper[4832]: E0312 14:51:05.925322 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:06.425311927 +0000 UTC m=+225.069326153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:05 crc kubenswrapper[4832]: I0312 14:51:05.960057 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qbwsc"] Mar 12 14:51:05 crc kubenswrapper[4832]: W0312 14:51:05.992995 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fdc1c63_8a73_405f_aede_75834651cccc.slice/crio-e0f8b68fd71abea03263ac1818eee27790415b3cceb52d56a13401421334b60b WatchSource:0}: Error finding container e0f8b68fd71abea03263ac1818eee27790415b3cceb52d56a13401421334b60b: Status 404 returned error can't find the container with id e0f8b68fd71abea03263ac1818eee27790415b3cceb52d56a13401421334b60b Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.022348 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" event={"ID":"579d6f1b-e8f5-4d51-9527-41988322b007","Type":"ContainerStarted","Data":"fdc362be4730cf1a61e6d0b1a6f982e33eb8b8158f3af90775aea0ec9498255a"} Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.028953 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.029286 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nxrbm\" (UniqueName: \"kubernetes.io/projected/bb45efb5-4239-4b47-9664-12fd61be0894-kube-api-access-nxrbm\") pod \"community-operators-9n2m2\" (UID: \"bb45efb5-4239-4b47-9664-12fd61be0894\") " pod="openshift-marketplace/community-operators-9n2m2" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.029326 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb45efb5-4239-4b47-9664-12fd61be0894-utilities\") pod \"community-operators-9n2m2\" (UID: \"bb45efb5-4239-4b47-9664-12fd61be0894\") " pod="openshift-marketplace/community-operators-9n2m2" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.029391 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb45efb5-4239-4b47-9664-12fd61be0894-catalog-content\") pod \"community-operators-9n2m2\" (UID: \"bb45efb5-4239-4b47-9664-12fd61be0894\") " pod="openshift-marketplace/community-operators-9n2m2" Mar 12 14:51:06 crc kubenswrapper[4832]: E0312 14:51:06.030924 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:06.530904031 +0000 UTC m=+225.174918257 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.032940 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb45efb5-4239-4b47-9664-12fd61be0894-catalog-content\") pod \"community-operators-9n2m2\" (UID: \"bb45efb5-4239-4b47-9664-12fd61be0894\") " pod="openshift-marketplace/community-operators-9n2m2" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.033002 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb45efb5-4239-4b47-9664-12fd61be0894-utilities\") pod \"community-operators-9n2m2\" (UID: \"bb45efb5-4239-4b47-9664-12fd61be0894\") " pod="openshift-marketplace/community-operators-9n2m2" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.050956 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxrbm\" (UniqueName: \"kubernetes.io/projected/bb45efb5-4239-4b47-9664-12fd61be0894-kube-api-access-nxrbm\") pod \"community-operators-9n2m2\" (UID: \"bb45efb5-4239-4b47-9664-12fd61be0894\") " pod="openshift-marketplace/community-operators-9n2m2" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.132448 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: 
\"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:06 crc kubenswrapper[4832]: E0312 14:51:06.133436 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:06.633421967 +0000 UTC m=+225.277436193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.137606 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-86f5t"] Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.137923 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" podUID="1276d8a9-5af1-4a3f-a61c-255ed424ee88" containerName="controller-manager" containerID="cri-o://979dcbd84649263cd73ccfed498765de2a810187fc62fe68ae40e5c95516bbe5" gracePeriod=30 Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.158906 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp"] Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.159108 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" podUID="ef9bd599-747b-470d-941b-fe7d6ee15be1" 
containerName="route-controller-manager" containerID="cri-o://7c04c82f84e6c42afb0b49de0f7cec6ef2d58552bb83ff0990dc1d4a9876c583" gracePeriod=30 Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.184755 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9n2m2" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.233466 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:06 crc kubenswrapper[4832]: E0312 14:51:06.233764 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:06.733749269 +0000 UTC m=+225.377763495 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.288471 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xql8n"] Mar 12 14:51:06 crc kubenswrapper[4832]: W0312 14:51:06.319870 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17368088_aec0_4319_8575_045b54487a1f.slice/crio-5a8254018cef9fa4d972ebbeaae47a499e5ac0277b97edde5f18d82471cef533 WatchSource:0}: Error finding container 5a8254018cef9fa4d972ebbeaae47a499e5ac0277b97edde5f18d82471cef533: Status 404 returned error can't find the container with id 5a8254018cef9fa4d972ebbeaae47a499e5ac0277b97edde5f18d82471cef533 Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.334581 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:06 crc kubenswrapper[4832]: E0312 14:51:06.334850 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:06.834839143 +0000 UTC m=+225.478853369 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.390257 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gc4cj"] Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.436129 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:06 crc kubenswrapper[4832]: E0312 14:51:06.436491 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:06.936472283 +0000 UTC m=+225.580486509 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.436741 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:06 crc kubenswrapper[4832]: E0312 14:51:06.437009 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:06.937002998 +0000 UTC m=+225.581017224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.540645 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:06 crc kubenswrapper[4832]: E0312 14:51:06.541005 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:07.040989386 +0000 UTC m=+225.685003612 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.617736 4832 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.642775 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:06 crc kubenswrapper[4832]: E0312 14:51:06.643173 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 14:51:07.143161031 +0000 UTC m=+225.787175257 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.666188 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.737181 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.746001 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1276d8a9-5af1-4a3f-a61c-255ed424ee88-config\") pod \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\" (UID: \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.746047 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1276d8a9-5af1-4a3f-a61c-255ed424ee88-client-ca\") pod \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\" (UID: \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.746093 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1276d8a9-5af1-4a3f-a61c-255ed424ee88-proxy-ca-bundles\") pod \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\" (UID: \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 
14:51:06.746230 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.746263 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9zr7\" (UniqueName: \"kubernetes.io/projected/1276d8a9-5af1-4a3f-a61c-255ed424ee88-kube-api-access-f9zr7\") pod \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\" (UID: \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.746309 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1276d8a9-5af1-4a3f-a61c-255ed424ee88-serving-cert\") pod \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\" (UID: \"1276d8a9-5af1-4a3f-a61c-255ed424ee88\") " Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.746966 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1276d8a9-5af1-4a3f-a61c-255ed424ee88-config" (OuterVolumeSpecName: "config") pod "1276d8a9-5af1-4a3f-a61c-255ed424ee88" (UID: "1276d8a9-5af1-4a3f-a61c-255ed424ee88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.747272 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1276d8a9-5af1-4a3f-a61c-255ed424ee88-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1276d8a9-5af1-4a3f-a61c-255ed424ee88" (UID: "1276d8a9-5af1-4a3f-a61c-255ed424ee88"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.747483 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1276d8a9-5af1-4a3f-a61c-255ed424ee88-client-ca" (OuterVolumeSpecName: "client-ca") pod "1276d8a9-5af1-4a3f-a61c-255ed424ee88" (UID: "1276d8a9-5af1-4a3f-a61c-255ed424ee88"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:06 crc kubenswrapper[4832]: E0312 14:51:06.747560 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 14:51:07.247546681 +0000 UTC m=+225.891560907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.761352 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1276d8a9-5af1-4a3f-a61c-255ed424ee88-kube-api-access-f9zr7" (OuterVolumeSpecName: "kube-api-access-f9zr7") pod "1276d8a9-5af1-4a3f-a61c-255ed424ee88" (UID: "1276d8a9-5af1-4a3f-a61c-255ed424ee88"). InnerVolumeSpecName "kube-api-access-f9zr7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.764710 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1276d8a9-5af1-4a3f-a61c-255ed424ee88-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1276d8a9-5af1-4a3f-a61c-255ed424ee88" (UID: "1276d8a9-5af1-4a3f-a61c-255ed424ee88"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.847865 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mg44\" (UniqueName: \"kubernetes.io/projected/ef9bd599-747b-470d-941b-fe7d6ee15be1-kube-api-access-8mg44\") pod \"ef9bd599-747b-470d-941b-fe7d6ee15be1\" (UID: \"ef9bd599-747b-470d-941b-fe7d6ee15be1\") " Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.847970 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9bd599-747b-470d-941b-fe7d6ee15be1-config\") pod \"ef9bd599-747b-470d-941b-fe7d6ee15be1\" (UID: \"ef9bd599-747b-470d-941b-fe7d6ee15be1\") " Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.848156 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef9bd599-747b-470d-941b-fe7d6ee15be1-client-ca\") pod \"ef9bd599-747b-470d-941b-fe7d6ee15be1\" (UID: \"ef9bd599-747b-470d-941b-fe7d6ee15be1\") " Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.848219 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef9bd599-747b-470d-941b-fe7d6ee15be1-serving-cert\") pod \"ef9bd599-747b-470d-941b-fe7d6ee15be1\" (UID: \"ef9bd599-747b-470d-941b-fe7d6ee15be1\") " Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.848386 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.848487 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1276d8a9-5af1-4a3f-a61c-255ed424ee88-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.848525 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1276d8a9-5af1-4a3f-a61c-255ed424ee88-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.848534 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1276d8a9-5af1-4a3f-a61c-255ed424ee88-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.848545 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9zr7\" (UniqueName: \"kubernetes.io/projected/1276d8a9-5af1-4a3f-a61c-255ed424ee88-kube-api-access-f9zr7\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.848555 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1276d8a9-5af1-4a3f-a61c-255ed424ee88-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:06 crc kubenswrapper[4832]: E0312 14:51:06.848789 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-12 14:51:07.348777529 +0000 UTC m=+225.992791755 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4tg76" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.851200 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef9bd599-747b-470d-941b-fe7d6ee15be1-client-ca" (OuterVolumeSpecName: "client-ca") pod "ef9bd599-747b-470d-941b-fe7d6ee15be1" (UID: "ef9bd599-747b-470d-941b-fe7d6ee15be1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.853230 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef9bd599-747b-470d-941b-fe7d6ee15be1-config" (OuterVolumeSpecName: "config") pod "ef9bd599-747b-470d-941b-fe7d6ee15be1" (UID: "ef9bd599-747b-470d-941b-fe7d6ee15be1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.857752 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef9bd599-747b-470d-941b-fe7d6ee15be1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ef9bd599-747b-470d-941b-fe7d6ee15be1" (UID: "ef9bd599-747b-470d-941b-fe7d6ee15be1"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.858757 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jthtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:51:06 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Mar 12 14:51:06 crc kubenswrapper[4832]: [+]process-running ok Mar 12 14:51:06 crc kubenswrapper[4832]: healthz check failed Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.858802 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jthtj" podUID="51605fc6-0da6-4a38-b44a-d8d47080ff6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.866954 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef9bd599-747b-470d-941b-fe7d6ee15be1-kube-api-access-8mg44" (OuterVolumeSpecName: "kube-api-access-8mg44") pod "ef9bd599-747b-470d-941b-fe7d6ee15be1" (UID: "ef9bd599-747b-470d-941b-fe7d6ee15be1"). InnerVolumeSpecName "kube-api-access-8mg44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.883023 4832 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-12T14:51:06.618072718Z","Handler":null,"Name":""} Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.898015 4832 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.898058 4832 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.908143 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9n2m2"] Mar 12 14:51:06 crc kubenswrapper[4832]: W0312 14:51:06.931567 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb45efb5_4239_4b47_9664_12fd61be0894.slice/crio-6fe0d2269637224e70e070ce9dcc62c0f564837710769f6268fcab2341c82dee WatchSource:0}: Error finding container 6fe0d2269637224e70e070ce9dcc62c0f564837710769f6268fcab2341c82dee: Status 404 returned error can't find the container with id 6fe0d2269637224e70e070ce9dcc62c0f564837710769f6268fcab2341c82dee Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.949363 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 
14:51:06.949831 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9bd599-747b-470d-941b-fe7d6ee15be1-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.949858 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef9bd599-747b-470d-941b-fe7d6ee15be1-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.949871 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef9bd599-747b-470d-941b-fe7d6ee15be1-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.949883 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mg44\" (UniqueName: \"kubernetes.io/projected/ef9bd599-747b-470d-941b-fe7d6ee15be1-kube-api-access-8mg44\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:06 crc kubenswrapper[4832]: I0312 14:51:06.954895 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.011089 4832 ???:1] "http: TLS handshake error from 192.168.126.11:42488: no serving certificate available for the kubelet" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.028304 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9n2m2" event={"ID":"bb45efb5-4239-4b47-9664-12fd61be0894","Type":"ContainerStarted","Data":"6fe0d2269637224e70e070ce9dcc62c0f564837710769f6268fcab2341c82dee"} Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.029743 4832 generic.go:334] "Generic (PLEG): container finished" podID="7fdc1c63-8a73-405f-aede-75834651cccc" containerID="dedb4b224557e8c6dcb50187983b9cde8ee400525e89ec4509e16bed23a060e7" exitCode=0 Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.029796 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qbwsc" event={"ID":"7fdc1c63-8a73-405f-aede-75834651cccc","Type":"ContainerDied","Data":"dedb4b224557e8c6dcb50187983b9cde8ee400525e89ec4509e16bed23a060e7"} Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.029816 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qbwsc" event={"ID":"7fdc1c63-8a73-405f-aede-75834651cccc","Type":"ContainerStarted","Data":"e0f8b68fd71abea03263ac1818eee27790415b3cceb52d56a13401421334b60b"} Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.032099 4832 generic.go:334] "Generic (PLEG): container finished" podID="17368088-aec0-4319-8575-045b54487a1f" containerID="86d14856c32f91aee9c37a944122a4f7260eb100f1442ba78f3511d2d1f35542" exitCode=0 Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.032162 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xql8n" 
event={"ID":"17368088-aec0-4319-8575-045b54487a1f","Type":"ContainerDied","Data":"86d14856c32f91aee9c37a944122a4f7260eb100f1442ba78f3511d2d1f35542"} Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.032189 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xql8n" event={"ID":"17368088-aec0-4319-8575-045b54487a1f","Type":"ContainerStarted","Data":"5a8254018cef9fa4d972ebbeaae47a499e5ac0277b97edde5f18d82471cef533"} Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.035154 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" event={"ID":"579d6f1b-e8f5-4d51-9527-41988322b007","Type":"ContainerStarted","Data":"44f049f5ff9474f0a9fafd464a5ae639efa244bcaccab838af6d779462169f51"} Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.035174 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" event={"ID":"579d6f1b-e8f5-4d51-9527-41988322b007","Type":"ContainerStarted","Data":"802473baa08b1e48d706d869f3f4c8cab8453d52379835e6752d385f8ef99801"} Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.036842 4832 generic.go:334] "Generic (PLEG): container finished" podID="1276d8a9-5af1-4a3f-a61c-255ed424ee88" containerID="979dcbd84649263cd73ccfed498765de2a810187fc62fe68ae40e5c95516bbe5" exitCode=0 Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.036892 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.036905 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" event={"ID":"1276d8a9-5af1-4a3f-a61c-255ed424ee88","Type":"ContainerDied","Data":"979dcbd84649263cd73ccfed498765de2a810187fc62fe68ae40e5c95516bbe5"} Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.036931 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-86f5t" event={"ID":"1276d8a9-5af1-4a3f-a61c-255ed424ee88","Type":"ContainerDied","Data":"4d95b1b10193add3054564bbd10dfabe154c5f755774a1fd52133a7e1a8f0b97"} Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.036945 4832 scope.go:117] "RemoveContainer" containerID="979dcbd84649263cd73ccfed498765de2a810187fc62fe68ae40e5c95516bbe5" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.051280 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.055776 4832 generic.go:334] "Generic (PLEG): container finished" podID="28ad10d5-8a9a-418b-af56-da46474279fe" containerID="5cea94da7ef39901353e4b62667213272b28ad255141df01b6b81b061023a904" exitCode=0 Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.055890 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gc4cj" event={"ID":"28ad10d5-8a9a-418b-af56-da46474279fe","Type":"ContainerDied","Data":"5cea94da7ef39901353e4b62667213272b28ad255141df01b6b81b061023a904"} Mar 12 14:51:07 crc 
kubenswrapper[4832]: I0312 14:51:07.055921 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gc4cj" event={"ID":"28ad10d5-8a9a-418b-af56-da46474279fe","Type":"ContainerStarted","Data":"dfa88c9782bd2724616b0d2c812156bae3f648bae7348eb99abe70548fb352bf"} Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.059196 4832 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.059244 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.064463 4832 generic.go:334] "Generic (PLEG): container finished" podID="ef9bd599-747b-470d-941b-fe7d6ee15be1" containerID="7c04c82f84e6c42afb0b49de0f7cec6ef2d58552bb83ff0990dc1d4a9876c583" exitCode=0 Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.064534 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" event={"ID":"ef9bd599-747b-470d-941b-fe7d6ee15be1","Type":"ContainerDied","Data":"7c04c82f84e6c42afb0b49de0f7cec6ef2d58552bb83ff0990dc1d4a9876c583"} Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.064564 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" 
event={"ID":"ef9bd599-747b-470d-941b-fe7d6ee15be1","Type":"ContainerDied","Data":"5f233ff0953e4839d2cb610c5c9ba29880d540c0683598a4f42b886efee8eaf1"} Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.064668 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.084831 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-gx8zc" podStartSLOduration=11.084812733 podStartE2EDuration="11.084812733s" podCreationTimestamp="2026-03-12 14:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:07.082084015 +0000 UTC m=+225.726098241" watchObservedRunningTime="2026-03-12 14:51:07.084812733 +0000 UTC m=+225.728826960" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.085609 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4tg76\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.139047 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-86f5t"] Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.143236 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-86f5t"] Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.145865 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp"] Mar 12 14:51:07 crc 
kubenswrapper[4832]: I0312 14:51:07.148271 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4rcp"] Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.149641 4832 scope.go:117] "RemoveContainer" containerID="979dcbd84649263cd73ccfed498765de2a810187fc62fe68ae40e5c95516bbe5" Mar 12 14:51:07 crc kubenswrapper[4832]: E0312 14:51:07.150349 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"979dcbd84649263cd73ccfed498765de2a810187fc62fe68ae40e5c95516bbe5\": container with ID starting with 979dcbd84649263cd73ccfed498765de2a810187fc62fe68ae40e5c95516bbe5 not found: ID does not exist" containerID="979dcbd84649263cd73ccfed498765de2a810187fc62fe68ae40e5c95516bbe5" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.150386 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"979dcbd84649263cd73ccfed498765de2a810187fc62fe68ae40e5c95516bbe5"} err="failed to get container status \"979dcbd84649263cd73ccfed498765de2a810187fc62fe68ae40e5c95516bbe5\": rpc error: code = NotFound desc = could not find container \"979dcbd84649263cd73ccfed498765de2a810187fc62fe68ae40e5c95516bbe5\": container with ID starting with 979dcbd84649263cd73ccfed498765de2a810187fc62fe68ae40e5c95516bbe5 not found: ID does not exist" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.150417 4832 scope.go:117] "RemoveContainer" containerID="7c04c82f84e6c42afb0b49de0f7cec6ef2d58552bb83ff0990dc1d4a9876c583" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.168442 4832 scope.go:117] "RemoveContainer" containerID="7c04c82f84e6c42afb0b49de0f7cec6ef2d58552bb83ff0990dc1d4a9876c583" Mar 12 14:51:07 crc kubenswrapper[4832]: E0312 14:51:07.168894 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7c04c82f84e6c42afb0b49de0f7cec6ef2d58552bb83ff0990dc1d4a9876c583\": container with ID starting with 7c04c82f84e6c42afb0b49de0f7cec6ef2d58552bb83ff0990dc1d4a9876c583 not found: ID does not exist" containerID="7c04c82f84e6c42afb0b49de0f7cec6ef2d58552bb83ff0990dc1d4a9876c583" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.168931 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c04c82f84e6c42afb0b49de0f7cec6ef2d58552bb83ff0990dc1d4a9876c583"} err="failed to get container status \"7c04c82f84e6c42afb0b49de0f7cec6ef2d58552bb83ff0990dc1d4a9876c583\": rpc error: code = NotFound desc = could not find container \"7c04c82f84e6c42afb0b49de0f7cec6ef2d58552bb83ff0990dc1d4a9876c583\": container with ID starting with 7c04c82f84e6c42afb0b49de0f7cec6ef2d58552bb83ff0990dc1d4a9876c583 not found: ID does not exist" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.255623 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.378045 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xnxm6"] Mar 12 14:51:07 crc kubenswrapper[4832]: E0312 14:51:07.378290 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef9bd599-747b-470d-941b-fe7d6ee15be1" containerName="route-controller-manager" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.378305 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef9bd599-747b-470d-941b-fe7d6ee15be1" containerName="route-controller-manager" Mar 12 14:51:07 crc kubenswrapper[4832]: E0312 14:51:07.378316 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1276d8a9-5af1-4a3f-a61c-255ed424ee88" containerName="controller-manager" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.378324 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1276d8a9-5af1-4a3f-a61c-255ed424ee88" containerName="controller-manager" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.378457 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1276d8a9-5af1-4a3f-a61c-255ed424ee88" containerName="controller-manager" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.378478 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef9bd599-747b-470d-941b-fe7d6ee15be1" containerName="route-controller-manager" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.379299 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnxm6" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.382530 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.391657 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnxm6"] Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.457260 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jgpj\" (UniqueName: \"kubernetes.io/projected/986f5b8c-a467-455c-9b4c-e53572535143-kube-api-access-5jgpj\") pod \"redhat-marketplace-xnxm6\" (UID: \"986f5b8c-a467-455c-9b4c-e53572535143\") " pod="openshift-marketplace/redhat-marketplace-xnxm6" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.457311 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/986f5b8c-a467-455c-9b4c-e53572535143-catalog-content\") pod \"redhat-marketplace-xnxm6\" (UID: \"986f5b8c-a467-455c-9b4c-e53572535143\") " pod="openshift-marketplace/redhat-marketplace-xnxm6" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.457383 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/986f5b8c-a467-455c-9b4c-e53572535143-utilities\") pod \"redhat-marketplace-xnxm6\" (UID: \"986f5b8c-a467-455c-9b4c-e53572535143\") " pod="openshift-marketplace/redhat-marketplace-xnxm6" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.512197 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8wfwt" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.532120 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4tg76"] Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.558314 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/986f5b8c-a467-455c-9b4c-e53572535143-utilities\") pod \"redhat-marketplace-xnxm6\" (UID: \"986f5b8c-a467-455c-9b4c-e53572535143\") " pod="openshift-marketplace/redhat-marketplace-xnxm6" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.558412 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jgpj\" (UniqueName: \"kubernetes.io/projected/986f5b8c-a467-455c-9b4c-e53572535143-kube-api-access-5jgpj\") pod \"redhat-marketplace-xnxm6\" (UID: \"986f5b8c-a467-455c-9b4c-e53572535143\") " pod="openshift-marketplace/redhat-marketplace-xnxm6" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.558434 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/986f5b8c-a467-455c-9b4c-e53572535143-catalog-content\") pod \"redhat-marketplace-xnxm6\" (UID: \"986f5b8c-a467-455c-9b4c-e53572535143\") " pod="openshift-marketplace/redhat-marketplace-xnxm6" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.558930 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/986f5b8c-a467-455c-9b4c-e53572535143-catalog-content\") pod \"redhat-marketplace-xnxm6\" (UID: \"986f5b8c-a467-455c-9b4c-e53572535143\") " pod="openshift-marketplace/redhat-marketplace-xnxm6" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.559291 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/986f5b8c-a467-455c-9b4c-e53572535143-utilities\") pod \"redhat-marketplace-xnxm6\" (UID: \"986f5b8c-a467-455c-9b4c-e53572535143\") " pod="openshift-marketplace/redhat-marketplace-xnxm6" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.598936 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jgpj\" (UniqueName: \"kubernetes.io/projected/986f5b8c-a467-455c-9b4c-e53572535143-kube-api-access-5jgpj\") pod \"redhat-marketplace-xnxm6\" (UID: \"986f5b8c-a467-455c-9b4c-e53572535143\") " pod="openshift-marketplace/redhat-marketplace-xnxm6" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.663427 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77d55c7d5-lmvd5"] Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.664215 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.666592 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.666701 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx"] Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.667422 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.667724 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.667736 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.668047 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.668310 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.668494 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.669397 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.670928 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.671087 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.671237 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.672955 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 14:51:07 crc 
kubenswrapper[4832]: I0312 14:51:07.675639 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.676635 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.677710 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx"] Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.680398 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77d55c7d5-lmvd5"] Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.709325 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnxm6" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.766036 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54kzm\" (UniqueName: \"kubernetes.io/projected/f34feff1-dcd4-4c93-a6f1-355d1f506425-kube-api-access-54kzm\") pod \"route-controller-manager-6dd9bf79c5-ztnrx\" (UID: \"f34feff1-dcd4-4c93-a6f1-355d1f506425\") " pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.766096 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f34feff1-dcd4-4c93-a6f1-355d1f506425-client-ca\") pod \"route-controller-manager-6dd9bf79c5-ztnrx\" (UID: \"f34feff1-dcd4-4c93-a6f1-355d1f506425\") " pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.766125 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87bce1d1-647c-4317-81eb-2fe7564306b5-config\") pod \"controller-manager-77d55c7d5-lmvd5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.766573 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f34feff1-dcd4-4c93-a6f1-355d1f506425-serving-cert\") pod \"route-controller-manager-6dd9bf79c5-ztnrx\" (UID: \"f34feff1-dcd4-4c93-a6f1-355d1f506425\") " pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.766718 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbszh\" (UniqueName: \"kubernetes.io/projected/87bce1d1-647c-4317-81eb-2fe7564306b5-kube-api-access-gbszh\") pod \"controller-manager-77d55c7d5-lmvd5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.766839 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87bce1d1-647c-4317-81eb-2fe7564306b5-proxy-ca-bundles\") pod \"controller-manager-77d55c7d5-lmvd5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.766873 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f34feff1-dcd4-4c93-a6f1-355d1f506425-config\") pod \"route-controller-manager-6dd9bf79c5-ztnrx\" (UID: 
\"f34feff1-dcd4-4c93-a6f1-355d1f506425\") " pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.766932 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87bce1d1-647c-4317-81eb-2fe7564306b5-client-ca\") pod \"controller-manager-77d55c7d5-lmvd5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.767089 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87bce1d1-647c-4317-81eb-2fe7564306b5-serving-cert\") pod \"controller-manager-77d55c7d5-lmvd5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.773620 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zjrmj"] Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.779769 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zjrmj" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.783460 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zjrmj"] Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.857371 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jthtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:51:07 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Mar 12 14:51:07 crc kubenswrapper[4832]: [+]process-running ok Mar 12 14:51:07 crc kubenswrapper[4832]: healthz check failed Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.857674 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jthtj" podUID="51605fc6-0da6-4a38-b44a-d8d47080ff6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.868492 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87bce1d1-647c-4317-81eb-2fe7564306b5-config\") pod \"controller-manager-77d55c7d5-lmvd5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.868564 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txw2k\" (UniqueName: \"kubernetes.io/projected/67ae4e40-af35-414c-8be7-4f9776319561-kube-api-access-txw2k\") pod \"redhat-marketplace-zjrmj\" (UID: \"67ae4e40-af35-414c-8be7-4f9776319561\") " pod="openshift-marketplace/redhat-marketplace-zjrmj" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.868604 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ae4e40-af35-414c-8be7-4f9776319561-catalog-content\") pod \"redhat-marketplace-zjrmj\" (UID: \"67ae4e40-af35-414c-8be7-4f9776319561\") " pod="openshift-marketplace/redhat-marketplace-zjrmj" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.868627 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f34feff1-dcd4-4c93-a6f1-355d1f506425-serving-cert\") pod \"route-controller-manager-6dd9bf79c5-ztnrx\" (UID: \"f34feff1-dcd4-4c93-a6f1-355d1f506425\") " pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.868664 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbszh\" (UniqueName: \"kubernetes.io/projected/87bce1d1-647c-4317-81eb-2fe7564306b5-kube-api-access-gbszh\") pod \"controller-manager-77d55c7d5-lmvd5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.868694 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87bce1d1-647c-4317-81eb-2fe7564306b5-proxy-ca-bundles\") pod \"controller-manager-77d55c7d5-lmvd5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.868746 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f34feff1-dcd4-4c93-a6f1-355d1f506425-config\") pod \"route-controller-manager-6dd9bf79c5-ztnrx\" (UID: \"f34feff1-dcd4-4c93-a6f1-355d1f506425\") " 
pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.868764 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87bce1d1-647c-4317-81eb-2fe7564306b5-client-ca\") pod \"controller-manager-77d55c7d5-lmvd5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.868779 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ae4e40-af35-414c-8be7-4f9776319561-utilities\") pod \"redhat-marketplace-zjrmj\" (UID: \"67ae4e40-af35-414c-8be7-4f9776319561\") " pod="openshift-marketplace/redhat-marketplace-zjrmj" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.868837 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87bce1d1-647c-4317-81eb-2fe7564306b5-serving-cert\") pod \"controller-manager-77d55c7d5-lmvd5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.868879 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54kzm\" (UniqueName: \"kubernetes.io/projected/f34feff1-dcd4-4c93-a6f1-355d1f506425-kube-api-access-54kzm\") pod \"route-controller-manager-6dd9bf79c5-ztnrx\" (UID: \"f34feff1-dcd4-4c93-a6f1-355d1f506425\") " pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.868942 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/f34feff1-dcd4-4c93-a6f1-355d1f506425-client-ca\") pod \"route-controller-manager-6dd9bf79c5-ztnrx\" (UID: \"f34feff1-dcd4-4c93-a6f1-355d1f506425\") " pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.869889 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f34feff1-dcd4-4c93-a6f1-355d1f506425-client-ca\") pod \"route-controller-manager-6dd9bf79c5-ztnrx\" (UID: \"f34feff1-dcd4-4c93-a6f1-355d1f506425\") " pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.870684 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87bce1d1-647c-4317-81eb-2fe7564306b5-client-ca\") pod \"controller-manager-77d55c7d5-lmvd5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.870737 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f34feff1-dcd4-4c93-a6f1-355d1f506425-config\") pod \"route-controller-manager-6dd9bf79c5-ztnrx\" (UID: \"f34feff1-dcd4-4c93-a6f1-355d1f506425\") " pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.871336 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87bce1d1-647c-4317-81eb-2fe7564306b5-config\") pod \"controller-manager-77d55c7d5-lmvd5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.871790 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87bce1d1-647c-4317-81eb-2fe7564306b5-proxy-ca-bundles\") pod \"controller-manager-77d55c7d5-lmvd5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.875310 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87bce1d1-647c-4317-81eb-2fe7564306b5-serving-cert\") pod \"controller-manager-77d55c7d5-lmvd5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.884060 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f34feff1-dcd4-4c93-a6f1-355d1f506425-serving-cert\") pod \"route-controller-manager-6dd9bf79c5-ztnrx\" (UID: \"f34feff1-dcd4-4c93-a6f1-355d1f506425\") " pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.886653 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbszh\" (UniqueName: \"kubernetes.io/projected/87bce1d1-647c-4317-81eb-2fe7564306b5-kube-api-access-gbszh\") pod \"controller-manager-77d55c7d5-lmvd5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.887082 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54kzm\" (UniqueName: \"kubernetes.io/projected/f34feff1-dcd4-4c93-a6f1-355d1f506425-kube-api-access-54kzm\") pod \"route-controller-manager-6dd9bf79c5-ztnrx\" (UID: \"f34feff1-dcd4-4c93-a6f1-355d1f506425\") " 
pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.974122 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ae4e40-af35-414c-8be7-4f9776319561-utilities\") pod \"redhat-marketplace-zjrmj\" (UID: \"67ae4e40-af35-414c-8be7-4f9776319561\") " pod="openshift-marketplace/redhat-marketplace-zjrmj" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.974241 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txw2k\" (UniqueName: \"kubernetes.io/projected/67ae4e40-af35-414c-8be7-4f9776319561-kube-api-access-txw2k\") pod \"redhat-marketplace-zjrmj\" (UID: \"67ae4e40-af35-414c-8be7-4f9776319561\") " pod="openshift-marketplace/redhat-marketplace-zjrmj" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.974270 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ae4e40-af35-414c-8be7-4f9776319561-catalog-content\") pod \"redhat-marketplace-zjrmj\" (UID: \"67ae4e40-af35-414c-8be7-4f9776319561\") " pod="openshift-marketplace/redhat-marketplace-zjrmj" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.975123 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ae4e40-af35-414c-8be7-4f9776319561-utilities\") pod \"redhat-marketplace-zjrmj\" (UID: \"67ae4e40-af35-414c-8be7-4f9776319561\") " pod="openshift-marketplace/redhat-marketplace-zjrmj" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.975663 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ae4e40-af35-414c-8be7-4f9776319561-catalog-content\") pod \"redhat-marketplace-zjrmj\" (UID: \"67ae4e40-af35-414c-8be7-4f9776319561\") " 
pod="openshift-marketplace/redhat-marketplace-zjrmj" Mar 12 14:51:07 crc kubenswrapper[4832]: I0312 14:51:07.991037 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txw2k\" (UniqueName: \"kubernetes.io/projected/67ae4e40-af35-414c-8be7-4f9776319561-kube-api-access-txw2k\") pod \"redhat-marketplace-zjrmj\" (UID: \"67ae4e40-af35-414c-8be7-4f9776319561\") " pod="openshift-marketplace/redhat-marketplace-zjrmj" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.004184 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.004202 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.008784 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.009626 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.013520 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.015839 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.018084 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.072434 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" event={"ID":"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d","Type":"ContainerStarted","Data":"7784eb298c82c334f20815f8a58c6035e8ef6ed5e0cf1b17d79b8062ce9ef768"} Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.072483 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" event={"ID":"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d","Type":"ContainerStarted","Data":"bdc68df9146d3e683128a1312418893dc21b89872c29b71ba826df2152f2ab94"} Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.072564 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.077061 4832 generic.go:334] "Generic (PLEG): container finished" podID="bb45efb5-4239-4b47-9664-12fd61be0894" containerID="7e757a9951e383aba498f4b4ea5a53ab01951f6a68f9254f65eaa8b45286f572" exitCode=0 Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.077137 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9n2m2" 
event={"ID":"bb45efb5-4239-4b47-9664-12fd61be0894","Type":"ContainerDied","Data":"7e757a9951e383aba498f4b4ea5a53ab01951f6a68f9254f65eaa8b45286f572"} Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.100156 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" podStartSLOduration=162.100138973 podStartE2EDuration="2m42.100138973s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:08.097098735 +0000 UTC m=+226.741112971" watchObservedRunningTime="2026-03-12 14:51:08.100138973 +0000 UTC m=+226.744153189" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.102690 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zjrmj" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.177971 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbeeb996-3111-455c-a029-efb16a638049-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bbeeb996-3111-455c-a029-efb16a638049\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.178009 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbeeb996-3111-455c-a029-efb16a638049-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bbeeb996-3111-455c-a029-efb16a638049\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.204583 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnxm6"] Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.270979 4832 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx"] Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.281572 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbeeb996-3111-455c-a029-efb16a638049-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bbeeb996-3111-455c-a029-efb16a638049\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.281621 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbeeb996-3111-455c-a029-efb16a638049-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bbeeb996-3111-455c-a029-efb16a638049\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.281787 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbeeb996-3111-455c-a029-efb16a638049-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bbeeb996-3111-455c-a029-efb16a638049\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.299473 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbeeb996-3111-455c-a029-efb16a638049-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bbeeb996-3111-455c-a029-efb16a638049\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.321723 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77d55c7d5-lmvd5"] Mar 12 14:51:08 crc kubenswrapper[4832]: W0312 14:51:08.328685 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87bce1d1_647c_4317_81eb_2fe7564306b5.slice/crio-4e0389c5a99b5d184b4f93eee7187a3b21f7cb1042047782e2b3b6577457d719 WatchSource:0}: Error finding container 4e0389c5a99b5d184b4f93eee7187a3b21f7cb1042047782e2b3b6577457d719: Status 404 returned error can't find the container with id 4e0389c5a99b5d184b4f93eee7187a3b21f7cb1042047782e2b3b6577457d719 Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.333848 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.370769 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7pf4z"] Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.372035 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7pf4z" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.374796 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.377787 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.378842 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7pf4z"] Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.384722 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-8x4hd" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.419814 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-hz5vn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: 
connection refused" start-of-body= Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.419864 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hz5vn" podUID="5d4068e0-53ed-433d-9657-ff75730d43a6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.420221 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-hz5vn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.420240 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hz5vn" podUID="5d4068e0-53ed-433d-9657-ff75730d43a6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.489779 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945e7b12-b4c7-45e9-9956-6dda3eed3c62-catalog-content\") pod \"redhat-operators-7pf4z\" (UID: \"945e7b12-b4c7-45e9-9956-6dda3eed3c62\") " pod="openshift-marketplace/redhat-operators-7pf4z" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.491179 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv7gl\" (UniqueName: \"kubernetes.io/projected/945e7b12-b4c7-45e9-9956-6dda3eed3c62-kube-api-access-kv7gl\") pod \"redhat-operators-7pf4z\" (UID: \"945e7b12-b4c7-45e9-9956-6dda3eed3c62\") " pod="openshift-marketplace/redhat-operators-7pf4z" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.491266 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945e7b12-b4c7-45e9-9956-6dda3eed3c62-utilities\") pod \"redhat-operators-7pf4z\" (UID: \"945e7b12-b4c7-45e9-9956-6dda3eed3c62\") " pod="openshift-marketplace/redhat-operators-7pf4z" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.573202 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.574458 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.576002 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.577635 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.585626 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.586300 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zjrmj"] Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.604272 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945e7b12-b4c7-45e9-9956-6dda3eed3c62-catalog-content\") pod \"redhat-operators-7pf4z\" (UID: \"945e7b12-b4c7-45e9-9956-6dda3eed3c62\") " pod="openshift-marketplace/redhat-operators-7pf4z" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.604331 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kv7gl\" (UniqueName: \"kubernetes.io/projected/945e7b12-b4c7-45e9-9956-6dda3eed3c62-kube-api-access-kv7gl\") pod \"redhat-operators-7pf4z\" (UID: \"945e7b12-b4c7-45e9-9956-6dda3eed3c62\") " pod="openshift-marketplace/redhat-operators-7pf4z" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.604428 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945e7b12-b4c7-45e9-9956-6dda3eed3c62-utilities\") pod \"redhat-operators-7pf4z\" (UID: \"945e7b12-b4c7-45e9-9956-6dda3eed3c62\") " pod="openshift-marketplace/redhat-operators-7pf4z" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.605076 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945e7b12-b4c7-45e9-9956-6dda3eed3c62-utilities\") pod \"redhat-operators-7pf4z\" (UID: \"945e7b12-b4c7-45e9-9956-6dda3eed3c62\") " pod="openshift-marketplace/redhat-operators-7pf4z" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.605145 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945e7b12-b4c7-45e9-9956-6dda3eed3c62-catalog-content\") pod \"redhat-operators-7pf4z\" (UID: \"945e7b12-b4c7-45e9-9956-6dda3eed3c62\") " pod="openshift-marketplace/redhat-operators-7pf4z" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.632470 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv7gl\" (UniqueName: \"kubernetes.io/projected/945e7b12-b4c7-45e9-9956-6dda3eed3c62-kube-api-access-kv7gl\") pod \"redhat-operators-7pf4z\" (UID: \"945e7b12-b4c7-45e9-9956-6dda3eed3c62\") " pod="openshift-marketplace/redhat-operators-7pf4z" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.666297 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1276d8a9-5af1-4a3f-a61c-255ed424ee88" 
path="/var/lib/kubelet/pods/1276d8a9-5af1-4a3f-a61c-255ed424ee88/volumes" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.668172 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.668787 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef9bd599-747b-470d-941b-fe7d6ee15be1" path="/var/lib/kubelet/pods/ef9bd599-747b-470d-941b-fe7d6ee15be1/volumes" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.707074 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6886971c-1f41-4c75-8031-f115776d8494-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6886971c-1f41-4c75-8031-f115776d8494\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.707137 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6886971c-1f41-4c75-8031-f115776d8494-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6886971c-1f41-4c75-8031-f115776d8494\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.726789 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7pf4z" Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.778197 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fwqfd"] Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.779826 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fwqfd"
Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.784884 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fwqfd"]
Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.814542 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6886971c-1f41-4c75-8031-f115776d8494-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6886971c-1f41-4c75-8031-f115776d8494\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.814608 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6886971c-1f41-4c75-8031-f115776d8494-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6886971c-1f41-4c75-8031-f115776d8494\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.814687 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6886971c-1f41-4c75-8031-f115776d8494-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6886971c-1f41-4c75-8031-f115776d8494\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.835659 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6886971c-1f41-4c75-8031-f115776d8494-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6886971c-1f41-4c75-8031-f115776d8494\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.854769 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-jthtj"
Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.858209 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jthtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:51:08 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld
Mar 12 14:51:08 crc kubenswrapper[4832]: [+]process-running ok
Mar 12 14:51:08 crc kubenswrapper[4832]: healthz check failed
Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.858298 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jthtj" podUID="51605fc6-0da6-4a38-b44a-d8d47080ff6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.917395 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584742b5-4cf7-4fcf-8b62-ad79df0bc737-catalog-content\") pod \"redhat-operators-fwqfd\" (UID: \"584742b5-4cf7-4fcf-8b62-ad79df0bc737\") " pod="openshift-marketplace/redhat-operators-fwqfd"
Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.917587 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlr5c\" (UniqueName: \"kubernetes.io/projected/584742b5-4cf7-4fcf-8b62-ad79df0bc737-kube-api-access-nlr5c\") pod \"redhat-operators-fwqfd\" (UID: \"584742b5-4cf7-4fcf-8b62-ad79df0bc737\") " pod="openshift-marketplace/redhat-operators-fwqfd"
Mar 12 14:51:08 crc kubenswrapper[4832]: I0312 14:51:08.917655 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584742b5-4cf7-4fcf-8b62-ad79df0bc737-utilities\") pod \"redhat-operators-fwqfd\" (UID: \"584742b5-4cf7-4fcf-8b62-ad79df0bc737\") " pod="openshift-marketplace/redhat-operators-fwqfd"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.019560 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584742b5-4cf7-4fcf-8b62-ad79df0bc737-utilities\") pod \"redhat-operators-fwqfd\" (UID: \"584742b5-4cf7-4fcf-8b62-ad79df0bc737\") " pod="openshift-marketplace/redhat-operators-fwqfd"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.019679 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584742b5-4cf7-4fcf-8b62-ad79df0bc737-catalog-content\") pod \"redhat-operators-fwqfd\" (UID: \"584742b5-4cf7-4fcf-8b62-ad79df0bc737\") " pod="openshift-marketplace/redhat-operators-fwqfd"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.019739 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlr5c\" (UniqueName: \"kubernetes.io/projected/584742b5-4cf7-4fcf-8b62-ad79df0bc737-kube-api-access-nlr5c\") pod \"redhat-operators-fwqfd\" (UID: \"584742b5-4cf7-4fcf-8b62-ad79df0bc737\") " pod="openshift-marketplace/redhat-operators-fwqfd"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.021326 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584742b5-4cf7-4fcf-8b62-ad79df0bc737-catalog-content\") pod \"redhat-operators-fwqfd\" (UID: \"584742b5-4cf7-4fcf-8b62-ad79df0bc737\") " pod="openshift-marketplace/redhat-operators-fwqfd"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.038905 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584742b5-4cf7-4fcf-8b62-ad79df0bc737-utilities\") pod \"redhat-operators-fwqfd\" (UID: \"584742b5-4cf7-4fcf-8b62-ad79df0bc737\") " pod="openshift-marketplace/redhat-operators-fwqfd"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.042310 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.050483 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.055039 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlr5c\" (UniqueName: \"kubernetes.io/projected/584742b5-4cf7-4fcf-8b62-ad79df0bc737-kube-api-access-nlr5c\") pod \"redhat-operators-fwqfd\" (UID: \"584742b5-4cf7-4fcf-8b62-ad79df0bc737\") " pod="openshift-marketplace/redhat-operators-fwqfd"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.055590 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.133566 4832 generic.go:334] "Generic (PLEG): container finished" podID="67ae4e40-af35-414c-8be7-4f9776319561" containerID="5c9de9b3c0cd7df6d7676c001fd6fe9f906fc03c6d920308db72b6cc69700942" exitCode=0
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.133622 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zjrmj" event={"ID":"67ae4e40-af35-414c-8be7-4f9776319561","Type":"ContainerDied","Data":"5c9de9b3c0cd7df6d7676c001fd6fe9f906fc03c6d920308db72b6cc69700942"}
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.133649 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zjrmj" event={"ID":"67ae4e40-af35-414c-8be7-4f9776319561","Type":"ContainerStarted","Data":"a0750ed1a5b65711d3e4528f95094fc118f99f1acc16712b4a0363354d85307b"}
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.137481 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fwqfd"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.197705 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" event={"ID":"87bce1d1-647c-4317-81eb-2fe7564306b5","Type":"ContainerStarted","Data":"b66ca919b749c6c26113bb81a0cabde5b274f9660b424a5526cbf3614c94ff97"}
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.197747 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" event={"ID":"87bce1d1-647c-4317-81eb-2fe7564306b5","Type":"ContainerStarted","Data":"4e0389c5a99b5d184b4f93eee7187a3b21f7cb1042047782e2b3b6577457d719"}
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.198420 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.236186 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" podStartSLOduration=3.236162712 podStartE2EDuration="3.236162712s" podCreationTimestamp="2026-03-12 14:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:09.234661059 +0000 UTC m=+227.878675285" watchObservedRunningTime="2026-03-12 14:51:09.236162712 +0000 UTC m=+227.880176948"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.242923 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.267279 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" event={"ID":"f34feff1-dcd4-4c93-a6f1-355d1f506425","Type":"ContainerStarted","Data":"6d00ae9d0afa89f0065393d2534c38cbbf064fc394ee1ab6a7e0ae40e8187019"}
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.267326 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" event={"ID":"f34feff1-dcd4-4c93-a6f1-355d1f506425","Type":"ContainerStarted","Data":"a8ee4d997f9aa27700aee58b38bc262e60d58fe1e6ec3d7fe5cc1c5de861791c"}
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.267540 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.282729 4832 generic.go:334] "Generic (PLEG): container finished" podID="986f5b8c-a467-455c-9b4c-e53572535143" containerID="122b2597d92aaada6706185c60a4eab8f06b27b00fd29cf49332496c5799e535" exitCode=0
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.287921 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnxm6" event={"ID":"986f5b8c-a467-455c-9b4c-e53572535143","Type":"ContainerDied","Data":"122b2597d92aaada6706185c60a4eab8f06b27b00fd29cf49332496c5799e535"}
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.287982 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnxm6" event={"ID":"986f5b8c-a467-455c-9b4c-e53572535143","Type":"ContainerStarted","Data":"2757f6911bf4d1355090a38d007e05f7f4c893aecddcefef4a92310f761707e7"}
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.302858 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.420311 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" podStartSLOduration=3.4202928200000002 podStartE2EDuration="3.42029282s" podCreationTimestamp="2026-03-12 14:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:09.358288793 +0000 UTC m=+228.002303019" watchObservedRunningTime="2026-03-12 14:51:09.42029282 +0000 UTC m=+228.064307046"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.443958 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7pf4z"]
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.521115 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-42j9g"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.522295 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-42j9g"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.536209 4832 patch_prober.go:28] interesting pod/console-f9d7485db-42j9g container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.536265 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-42j9g" podUID="82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.616242 4832 ???:1] "http: TLS handshake error from 192.168.126.11:53838: no serving certificate available for the kubelet"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.826981 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.865053 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jthtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:51:09 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld
Mar 12 14:51:09 crc kubenswrapper[4832]: [+]process-running ok
Mar 12 14:51:09 crc kubenswrapper[4832]: healthz check failed
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.865099 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jthtj" podUID="51605fc6-0da6-4a38-b44a-d8d47080ff6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:51:09 crc kubenswrapper[4832]: I0312 14:51:09.905220 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fwqfd"]
Mar 12 14:51:09 crc kubenswrapper[4832]: W0312 14:51:09.965913 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod584742b5_4cf7_4fcf_8b62_ad79df0bc737.slice/crio-7e3d26b9dd47c2f926bae34c978bf0adcacff8d99af12d74560bc5efc5409354 WatchSource:0}: Error finding container 7e3d26b9dd47c2f926bae34c978bf0adcacff8d99af12d74560bc5efc5409354: Status 404 returned error can't find the container with id 7e3d26b9dd47c2f926bae34c978bf0adcacff8d99af12d74560bc5efc5409354
Mar 12 14:51:10 crc kubenswrapper[4832]: I0312 14:51:10.293125 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6886971c-1f41-4c75-8031-f115776d8494","Type":"ContainerStarted","Data":"1dd96fc6d612cd1e176e962a0715a3187ed74df2369ad99c67604bd9847a9de5"}
Mar 12 14:51:10 crc kubenswrapper[4832]: I0312 14:51:10.295474 4832 generic.go:334] "Generic (PLEG): container finished" podID="0bf32718-d22d-4e55-b158-43a02ef6a67f" containerID="419df817519c7b2b9f1e806963b4f568797fe722156998155b51647be6b23668" exitCode=0
Mar 12 14:51:10 crc kubenswrapper[4832]: I0312 14:51:10.295548 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk" event={"ID":"0bf32718-d22d-4e55-b158-43a02ef6a67f","Type":"ContainerDied","Data":"419df817519c7b2b9f1e806963b4f568797fe722156998155b51647be6b23668"}
Mar 12 14:51:10 crc kubenswrapper[4832]: I0312 14:51:10.299005 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bbeeb996-3111-455c-a029-efb16a638049","Type":"ContainerStarted","Data":"c4d33d2f18358d02e40e69413425106c02ee7e30c81ecfe4ddfdc80e3c3614b1"}
Mar 12 14:51:10 crc kubenswrapper[4832]: I0312 14:51:10.299035 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bbeeb996-3111-455c-a029-efb16a638049","Type":"ContainerStarted","Data":"6e04b061772e2798182d9a59d0e69f96cb8a946559a891359ea0cfc4ab07912b"}
Mar 12 14:51:10 crc kubenswrapper[4832]: I0312 14:51:10.301254 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwqfd" event={"ID":"584742b5-4cf7-4fcf-8b62-ad79df0bc737","Type":"ContainerStarted","Data":"7e3d26b9dd47c2f926bae34c978bf0adcacff8d99af12d74560bc5efc5409354"}
Mar 12 14:51:10 crc kubenswrapper[4832]: I0312 14:51:10.303770 4832 generic.go:334] "Generic (PLEG): container finished" podID="945e7b12-b4c7-45e9-9956-6dda3eed3c62" containerID="575078461d28e4e1c05880bd7b15db3d39d6d03f467859b2ded519f19342a5ec" exitCode=0
Mar 12 14:51:10 crc kubenswrapper[4832]: I0312 14:51:10.303819 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pf4z" event={"ID":"945e7b12-b4c7-45e9-9956-6dda3eed3c62","Type":"ContainerDied","Data":"575078461d28e4e1c05880bd7b15db3d39d6d03f467859b2ded519f19342a5ec"}
Mar 12 14:51:10 crc kubenswrapper[4832]: I0312 14:51:10.303861 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pf4z" event={"ID":"945e7b12-b4c7-45e9-9956-6dda3eed3c62","Type":"ContainerStarted","Data":"726a4d754524f683074e0bc1ed4cd008f67a4917dcda3d5084f2be846f97aabd"}
Mar 12 14:51:10 crc kubenswrapper[4832]: I0312 14:51:10.331169 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.331151569 podStartE2EDuration="3.331151569s" podCreationTimestamp="2026-03-12 14:51:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:10.327271037 +0000 UTC m=+228.971285263" watchObservedRunningTime="2026-03-12 14:51:10.331151569 +0000 UTC m=+228.975165795"
Mar 12 14:51:10 crc kubenswrapper[4832]: I0312 14:51:10.856368 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jthtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:51:10 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld
Mar 12 14:51:10 crc kubenswrapper[4832]: [+]process-running ok
Mar 12 14:51:10 crc kubenswrapper[4832]: healthz check failed
Mar 12 14:51:10 crc kubenswrapper[4832]: I0312 14:51:10.856439 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jthtj" podUID="51605fc6-0da6-4a38-b44a-d8d47080ff6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:51:11 crc kubenswrapper[4832]: I0312 14:51:11.315696 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6886971c-1f41-4c75-8031-f115776d8494","Type":"ContainerStarted","Data":"e7f91731194433543692dadb29402be8f7981ca8651eb423d4db7dcb00b0a4cc"}
Mar 12 14:51:11 crc kubenswrapper[4832]: I0312 14:51:11.319631 4832 generic.go:334] "Generic (PLEG): container finished" podID="bbeeb996-3111-455c-a029-efb16a638049" containerID="c4d33d2f18358d02e40e69413425106c02ee7e30c81ecfe4ddfdc80e3c3614b1" exitCode=0
Mar 12 14:51:11 crc kubenswrapper[4832]: I0312 14:51:11.319700 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bbeeb996-3111-455c-a029-efb16a638049","Type":"ContainerDied","Data":"c4d33d2f18358d02e40e69413425106c02ee7e30c81ecfe4ddfdc80e3c3614b1"}
Mar 12 14:51:11 crc kubenswrapper[4832]: I0312 14:51:11.321801 4832 generic.go:334] "Generic (PLEG): container finished" podID="584742b5-4cf7-4fcf-8b62-ad79df0bc737" containerID="31b888b959f86a2a4c60a7cd9ffd5a6324ee2c514a034b75a31d3fb61be1fbd6" exitCode=0
Mar 12 14:51:11 crc kubenswrapper[4832]: I0312 14:51:11.322829 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwqfd" event={"ID":"584742b5-4cf7-4fcf-8b62-ad79df0bc737","Type":"ContainerDied","Data":"31b888b959f86a2a4c60a7cd9ffd5a6324ee2c514a034b75a31d3fb61be1fbd6"}
Mar 12 14:51:11 crc kubenswrapper[4832]: I0312 14:51:11.334044 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.334023309 podStartE2EDuration="3.334023309s" podCreationTimestamp="2026-03-12 14:51:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:11.333930926 +0000 UTC m=+229.977945152" watchObservedRunningTime="2026-03-12 14:51:11.334023309 +0000 UTC m=+229.978037535"
Mar 12 14:51:11 crc kubenswrapper[4832]: I0312 14:51:11.468266 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-svx7c"
Mar 12 14:51:11 crc kubenswrapper[4832]: I0312 14:51:11.639090 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk"
Mar 12 14:51:11 crc kubenswrapper[4832]: I0312 14:51:11.775141 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f48kh\" (UniqueName: \"kubernetes.io/projected/0bf32718-d22d-4e55-b158-43a02ef6a67f-kube-api-access-f48kh\") pod \"0bf32718-d22d-4e55-b158-43a02ef6a67f\" (UID: \"0bf32718-d22d-4e55-b158-43a02ef6a67f\") "
Mar 12 14:51:11 crc kubenswrapper[4832]: I0312 14:51:11.775245 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bf32718-d22d-4e55-b158-43a02ef6a67f-secret-volume\") pod \"0bf32718-d22d-4e55-b158-43a02ef6a67f\" (UID: \"0bf32718-d22d-4e55-b158-43a02ef6a67f\") "
Mar 12 14:51:11 crc kubenswrapper[4832]: I0312 14:51:11.775274 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bf32718-d22d-4e55-b158-43a02ef6a67f-config-volume\") pod \"0bf32718-d22d-4e55-b158-43a02ef6a67f\" (UID: \"0bf32718-d22d-4e55-b158-43a02ef6a67f\") "
Mar 12 14:51:11 crc kubenswrapper[4832]: I0312 14:51:11.776426 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bf32718-d22d-4e55-b158-43a02ef6a67f-config-volume" (OuterVolumeSpecName: "config-volume") pod "0bf32718-d22d-4e55-b158-43a02ef6a67f" (UID: "0bf32718-d22d-4e55-b158-43a02ef6a67f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:51:11 crc kubenswrapper[4832]: I0312 14:51:11.793646 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf32718-d22d-4e55-b158-43a02ef6a67f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0bf32718-d22d-4e55-b158-43a02ef6a67f" (UID: "0bf32718-d22d-4e55-b158-43a02ef6a67f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:11 crc kubenswrapper[4832]: I0312 14:51:11.798047 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf32718-d22d-4e55-b158-43a02ef6a67f-kube-api-access-f48kh" (OuterVolumeSpecName: "kube-api-access-f48kh") pod "0bf32718-d22d-4e55-b158-43a02ef6a67f" (UID: "0bf32718-d22d-4e55-b158-43a02ef6a67f"). InnerVolumeSpecName "kube-api-access-f48kh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:51:11 crc kubenswrapper[4832]: I0312 14:51:11.854360 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jthtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:51:11 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld
Mar 12 14:51:11 crc kubenswrapper[4832]: [+]process-running ok
Mar 12 14:51:11 crc kubenswrapper[4832]: healthz check failed
Mar 12 14:51:11 crc kubenswrapper[4832]: I0312 14:51:11.854428 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jthtj" podUID="51605fc6-0da6-4a38-b44a-d8d47080ff6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:51:11 crc kubenswrapper[4832]: I0312 14:51:11.876671 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f48kh\" (UniqueName: \"kubernetes.io/projected/0bf32718-d22d-4e55-b158-43a02ef6a67f-kube-api-access-f48kh\") on node \"crc\" DevicePath \"\""
Mar 12 14:51:11 crc kubenswrapper[4832]: I0312 14:51:11.876704 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bf32718-d22d-4e55-b158-43a02ef6a67f-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 12 14:51:11 crc kubenswrapper[4832]: I0312 14:51:11.876717 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bf32718-d22d-4e55-b158-43a02ef6a67f-config-volume\") on node \"crc\" DevicePath \"\""
Mar 12 14:51:12 crc kubenswrapper[4832]: I0312 14:51:12.107429 4832 ???:1] "http: TLS handshake error from 192.168.126.11:53844: no serving certificate available for the kubelet"
Mar 12 14:51:12 crc kubenswrapper[4832]: I0312 14:51:12.341899 4832 generic.go:334] "Generic (PLEG): container finished" podID="6886971c-1f41-4c75-8031-f115776d8494" containerID="e7f91731194433543692dadb29402be8f7981ca8651eb423d4db7dcb00b0a4cc" exitCode=0
Mar 12 14:51:12 crc kubenswrapper[4832]: I0312 14:51:12.341972 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6886971c-1f41-4c75-8031-f115776d8494","Type":"ContainerDied","Data":"e7f91731194433543692dadb29402be8f7981ca8651eb423d4db7dcb00b0a4cc"}
Mar 12 14:51:12 crc kubenswrapper[4832]: I0312 14:51:12.350454 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk"
Mar 12 14:51:12 crc kubenswrapper[4832]: I0312 14:51:12.352936 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk" event={"ID":"0bf32718-d22d-4e55-b158-43a02ef6a67f","Type":"ContainerDied","Data":"631f747bb4cb3108c32e4feb329441df54615304ea6f58d5ca2d889c5b7adb4b"}
Mar 12 14:51:12 crc kubenswrapper[4832]: I0312 14:51:12.353113 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="631f747bb4cb3108c32e4feb329441df54615304ea6f58d5ca2d889c5b7adb4b"
Mar 12 14:51:12 crc kubenswrapper[4832]: I0312 14:51:12.854612 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-jthtj"
Mar 12 14:51:12 crc kubenswrapper[4832]: I0312 14:51:12.856967 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-jthtj"
Mar 12 14:51:14 crc kubenswrapper[4832]: I0312 14:51:14.762017 4832 ???:1] "http: TLS handshake error from 192.168.126.11:53858: no serving certificate available for the kubelet"
Mar 12 14:51:18 crc kubenswrapper[4832]: I0312 14:51:18.182011 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs\") pod \"network-metrics-daemon-lmjrb\" (UID: \"c3abc18e-3b7e-4afe-b35b-3b619290e875\") " pod="openshift-multus/network-metrics-daemon-lmjrb"
Mar 12 14:51:18 crc kubenswrapper[4832]: I0312 14:51:18.188016 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3abc18e-3b7e-4afe-b35b-3b619290e875-metrics-certs\") pod \"network-metrics-daemon-lmjrb\" (UID: \"c3abc18e-3b7e-4afe-b35b-3b619290e875\") " pod="openshift-multus/network-metrics-daemon-lmjrb"
Mar 12 14:51:18 crc kubenswrapper[4832]: I0312 14:51:18.387489 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lmjrb"
Mar 12 14:51:18 crc kubenswrapper[4832]: I0312 14:51:18.422714 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-hz5vn"
Mar 12 14:51:18 crc kubenswrapper[4832]: I0312 14:51:18.929637 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 12 14:51:19 crc kubenswrapper[4832]: I0312 14:51:19.095306 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbeeb996-3111-455c-a029-efb16a638049-kubelet-dir\") pod \"bbeeb996-3111-455c-a029-efb16a638049\" (UID: \"bbeeb996-3111-455c-a029-efb16a638049\") "
Mar 12 14:51:19 crc kubenswrapper[4832]: I0312 14:51:19.095771 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbeeb996-3111-455c-a029-efb16a638049-kube-api-access\") pod \"bbeeb996-3111-455c-a029-efb16a638049\" (UID: \"bbeeb996-3111-455c-a029-efb16a638049\") "
Mar 12 14:51:19 crc kubenswrapper[4832]: I0312 14:51:19.095423 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbeeb996-3111-455c-a029-efb16a638049-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bbeeb996-3111-455c-a029-efb16a638049" (UID: "bbeeb996-3111-455c-a029-efb16a638049"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:51:19 crc kubenswrapper[4832]: I0312 14:51:19.096143 4832 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbeeb996-3111-455c-a029-efb16a638049-kubelet-dir\") on node \"crc\" DevicePath \"\""
Mar 12 14:51:19 crc kubenswrapper[4832]: I0312 14:51:19.099128 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbeeb996-3111-455c-a029-efb16a638049-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bbeeb996-3111-455c-a029-efb16a638049" (UID: "bbeeb996-3111-455c-a029-efb16a638049"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:51:19 crc kubenswrapper[4832]: I0312 14:51:19.198188 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbeeb996-3111-455c-a029-efb16a638049-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 12 14:51:19 crc kubenswrapper[4832]: I0312 14:51:19.390778 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bbeeb996-3111-455c-a029-efb16a638049","Type":"ContainerDied","Data":"6e04b061772e2798182d9a59d0e69f96cb8a946559a891359ea0cfc4ab07912b"}
Mar 12 14:51:19 crc kubenswrapper[4832]: I0312 14:51:19.390824 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e04b061772e2798182d9a59d0e69f96cb8a946559a891359ea0cfc4ab07912b"
Mar 12 14:51:19 crc kubenswrapper[4832]: I0312 14:51:19.390829 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 12 14:51:19 crc kubenswrapper[4832]: I0312 14:51:19.557652 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-42j9g"
Mar 12 14:51:19 crc kubenswrapper[4832]: I0312 14:51:19.565352 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-42j9g"
Mar 12 14:51:22 crc kubenswrapper[4832]: I0312 14:51:22.276288 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 12 14:51:22 crc kubenswrapper[4832]: I0312 14:51:22.408372 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6886971c-1f41-4c75-8031-f115776d8494","Type":"ContainerDied","Data":"1dd96fc6d612cd1e176e962a0715a3187ed74df2369ad99c67604bd9847a9de5"}
Mar 12 14:51:22 crc kubenswrapper[4832]: I0312 14:51:22.408419 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dd96fc6d612cd1e176e962a0715a3187ed74df2369ad99c67604bd9847a9de5"
Mar 12 14:51:22 crc kubenswrapper[4832]: I0312 14:51:22.408458 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 12 14:51:22 crc kubenswrapper[4832]: I0312 14:51:22.442887 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6886971c-1f41-4c75-8031-f115776d8494-kube-api-access\") pod \"6886971c-1f41-4c75-8031-f115776d8494\" (UID: \"6886971c-1f41-4c75-8031-f115776d8494\") "
Mar 12 14:51:22 crc kubenswrapper[4832]: I0312 14:51:22.443049 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6886971c-1f41-4c75-8031-f115776d8494-kubelet-dir\") pod \"6886971c-1f41-4c75-8031-f115776d8494\" (UID: \"6886971c-1f41-4c75-8031-f115776d8494\") "
Mar 12 14:51:22 crc kubenswrapper[4832]: I0312 14:51:22.443331 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6886971c-1f41-4c75-8031-f115776d8494-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6886971c-1f41-4c75-8031-f115776d8494" (UID: "6886971c-1f41-4c75-8031-f115776d8494"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:51:22 crc kubenswrapper[4832]: I0312 14:51:22.443432 4832 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6886971c-1f41-4c75-8031-f115776d8494-kubelet-dir\") on node \"crc\" DevicePath \"\""
Mar 12 14:51:22 crc kubenswrapper[4832]: I0312 14:51:22.448306 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6886971c-1f41-4c75-8031-f115776d8494-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6886971c-1f41-4c75-8031-f115776d8494" (UID: "6886971c-1f41-4c75-8031-f115776d8494"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:51:22 crc kubenswrapper[4832]: I0312 14:51:22.544713 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6886971c-1f41-4c75-8031-f115776d8494-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 12 14:51:23 crc kubenswrapper[4832]: E0312 14:51:23.732015 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest"
Mar 12 14:51:23 crc kubenswrapper[4832]: E0312 14:51:23.732363 4832 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 12 14:51:23 crc kubenswrapper[4832]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Mar 12 14:51:23 crc kubenswrapper[4832]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fkx8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29555450-w9jtm_openshift-infra(35da9b9e-133b-4d7f-a32e-908d9fc7734b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled
Mar 12 14:51:23 crc kubenswrapper[4832]: > logger="UnhandledError"
Mar 12 14:51:23 crc kubenswrapper[4832]: E0312 14:51:23.733686 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29555450-w9jtm" podUID="35da9b9e-133b-4d7f-a32e-908d9fc7734b"
Mar 12 14:51:24 crc kubenswrapper[4832]: E0312 14:51:24.424151 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29555450-w9jtm" podUID="35da9b9e-133b-4d7f-a32e-908d9fc7734b"
Mar 12 14:51:25 crc kubenswrapper[4832]: I0312 14:51:25.693651 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77d55c7d5-lmvd5"]
Mar 12 14:51:25 crc kubenswrapper[4832]: I0312 14:51:25.693851 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" podUID="87bce1d1-647c-4317-81eb-2fe7564306b5" containerName="controller-manager" containerID="cri-o://b66ca919b749c6c26113bb81a0cabde5b274f9660b424a5526cbf3614c94ff97" gracePeriod=30
Mar 12 14:51:25 crc kubenswrapper[4832]: I0312 14:51:25.712436 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx"]
Mar 12 14:51:25 crc kubenswrapper[4832]: I0312 14:51:25.713030 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" podUID="f34feff1-dcd4-4c93-a6f1-355d1f506425" containerName="route-controller-manager" containerID="cri-o://6d00ae9d0afa89f0065393d2534c38cbbf064fc394ee1ab6a7e0ae40e8187019" gracePeriod=30
Mar 12 14:51:26 crc kubenswrapper[4832]: I0312 14:51:26.314100 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 14:51:26 crc kubenswrapper[4832]: I0312 14:51:26.314437 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 14:51:26 crc kubenswrapper[4832]: I0312 14:51:26.432170 4832 generic.go:334] "Generic (PLEG): container finished" podID="87bce1d1-647c-4317-81eb-2fe7564306b5" containerID="b66ca919b749c6c26113bb81a0cabde5b274f9660b424a5526cbf3614c94ff97" exitCode=0
Mar 12 14:51:26 crc kubenswrapper[4832]: I0312 14:51:26.432222 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" event={"ID":"87bce1d1-647c-4317-81eb-2fe7564306b5","Type":"ContainerDied","Data":"b66ca919b749c6c26113bb81a0cabde5b274f9660b424a5526cbf3614c94ff97"}
Mar 12 14:51:26 crc kubenswrapper[4832]: I0312 14:51:26.434654 4832 generic.go:334] "Generic (PLEG): container finished" podID="f34feff1-dcd4-4c93-a6f1-355d1f506425" containerID="6d00ae9d0afa89f0065393d2534c38cbbf064fc394ee1ab6a7e0ae40e8187019" exitCode=0
Mar 12 14:51:26 crc kubenswrapper[4832]: I0312 14:51:26.434695 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx"
event={"ID":"f34feff1-dcd4-4c93-a6f1-355d1f506425","Type":"ContainerDied","Data":"6d00ae9d0afa89f0065393d2534c38cbbf064fc394ee1ab6a7e0ae40e8187019"} Mar 12 14:51:27 crc kubenswrapper[4832]: I0312 14:51:27.260871 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:51:28 crc kubenswrapper[4832]: I0312 14:51:28.005290 4832 patch_prober.go:28] interesting pod/controller-manager-77d55c7d5-lmvd5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" start-of-body= Mar 12 14:51:28 crc kubenswrapper[4832]: I0312 14:51:28.005969 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" podUID="87bce1d1-647c-4317-81eb-2fe7564306b5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" Mar 12 14:51:28 crc kubenswrapper[4832]: I0312 14:51:28.005387 4832 patch_prober.go:28] interesting pod/route-controller-manager-6dd9bf79c5-ztnrx container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body= Mar 12 14:51:28 crc kubenswrapper[4832]: I0312 14:51:28.006344 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" podUID="f34feff1-dcd4-4c93-a6f1-355d1f506425" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" Mar 12 14:51:30 crc kubenswrapper[4832]: I0312 14:51:30.076236 4832 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lmjrb"] Mar 12 14:51:35 crc kubenswrapper[4832]: I0312 14:51:35.271528 4832 ???:1] "http: TLS handshake error from 192.168.126.11:48258: no serving certificate available for the kubelet" Mar 12 14:51:36 crc kubenswrapper[4832]: E0312 14:51:36.191805 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 12 14:51:36 crc kubenswrapper[4832]: E0312 14:51:36.191981 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jgpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,}
,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-xnxm6_openshift-marketplace(986f5b8c-a467-455c-9b4c-e53572535143): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 14:51:36 crc kubenswrapper[4832]: E0312 14:51:36.193424 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-xnxm6" podUID="986f5b8c-a467-455c-9b4c-e53572535143" Mar 12 14:51:39 crc kubenswrapper[4832]: I0312 14:51:39.006001 4832 patch_prober.go:28] interesting pod/route-controller-manager-6dd9bf79c5-ztnrx container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:51:39 crc kubenswrapper[4832]: I0312 14:51:39.006389 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" podUID="f34feff1-dcd4-4c93-a6f1-355d1f506425" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:51:39 crc kubenswrapper[4832]: I0312 14:51:39.006048 4832 patch_prober.go:28] interesting pod/controller-manager-77d55c7d5-lmvd5 container/controller-manager namespace/openshift-controller-manager: Readiness 
probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: i/o timeout" start-of-body= Mar 12 14:51:39 crc kubenswrapper[4832]: I0312 14:51:39.006739 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" podUID="87bce1d1-647c-4317-81eb-2fe7564306b5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: i/o timeout" Mar 12 14:51:39 crc kubenswrapper[4832]: I0312 14:51:39.126710 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bb9w7" Mar 12 14:51:39 crc kubenswrapper[4832]: E0312 14:51:39.316247 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 12 14:51:39 crc kubenswrapper[4832]: E0312 14:51:39.316428 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmcgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xql8n_openshift-marketplace(17368088-aec0-4319-8575-045b54487a1f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 14:51:39 crc kubenswrapper[4832]: E0312 14:51:39.317617 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xql8n" podUID="17368088-aec0-4319-8575-045b54487a1f" Mar 12 14:51:40 crc 
kubenswrapper[4832]: W0312 14:51:40.795394 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3abc18e_3b7e_4afe_b35b_3b619290e875.slice/crio-609759b192b19fa5c758a808746c06d20d1ed8f913878ec22fafcd4af6c6accd WatchSource:0}: Error finding container 609759b192b19fa5c758a808746c06d20d1ed8f913878ec22fafcd4af6c6accd: Status 404 returned error can't find the container with id 609759b192b19fa5c758a808746c06d20d1ed8f913878ec22fafcd4af6c6accd Mar 12 14:51:40 crc kubenswrapper[4832]: E0312 14:51:40.803840 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xql8n" podUID="17368088-aec0-4319-8575-045b54487a1f" Mar 12 14:51:40 crc kubenswrapper[4832]: E0312 14:51:40.803914 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xnxm6" podUID="986f5b8c-a467-455c-9b4c-e53572535143" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.844676 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.849578 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" Mar 12 14:51:40 crc kubenswrapper[4832]: E0312 14:51:40.881149 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 12 14:51:40 crc kubenswrapper[4832]: E0312 14:51:40.881310 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nxrbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,
ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9n2m2_openshift-marketplace(bb45efb5-4239-4b47-9664-12fd61be0894): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 14:51:40 crc kubenswrapper[4832]: E0312 14:51:40.882679 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9n2m2" podUID="bb45efb5-4239-4b47-9664-12fd61be0894" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.887137 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p"] Mar 12 14:51:40 crc kubenswrapper[4832]: E0312 14:51:40.887373 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6886971c-1f41-4c75-8031-f115776d8494" containerName="pruner" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.887388 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6886971c-1f41-4c75-8031-f115776d8494" containerName="pruner" Mar 12 14:51:40 crc kubenswrapper[4832]: E0312 14:51:40.887400 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87bce1d1-647c-4317-81eb-2fe7564306b5" containerName="controller-manager" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.887408 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="87bce1d1-647c-4317-81eb-2fe7564306b5" containerName="controller-manager" Mar 12 14:51:40 crc kubenswrapper[4832]: E0312 14:51:40.887419 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbeeb996-3111-455c-a029-efb16a638049" containerName="pruner" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.887426 4832 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="bbeeb996-3111-455c-a029-efb16a638049" containerName="pruner" Mar 12 14:51:40 crc kubenswrapper[4832]: E0312 14:51:40.887436 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf32718-d22d-4e55-b158-43a02ef6a67f" containerName="collect-profiles" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.887442 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf32718-d22d-4e55-b158-43a02ef6a67f" containerName="collect-profiles" Mar 12 14:51:40 crc kubenswrapper[4832]: E0312 14:51:40.887451 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34feff1-dcd4-4c93-a6f1-355d1f506425" containerName="route-controller-manager" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.887456 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34feff1-dcd4-4c93-a6f1-355d1f506425" containerName="route-controller-manager" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.887568 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6886971c-1f41-4c75-8031-f115776d8494" containerName="pruner" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.887582 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f34feff1-dcd4-4c93-a6f1-355d1f506425" containerName="route-controller-manager" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.887589 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf32718-d22d-4e55-b158-43a02ef6a67f" containerName="collect-profiles" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.887603 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="87bce1d1-647c-4317-81eb-2fe7564306b5" containerName="controller-manager" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.887611 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbeeb996-3111-455c-a029-efb16a638049" containerName="pruner" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.887946 4832 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" Mar 12 14:51:40 crc kubenswrapper[4832]: E0312 14:51:40.894767 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 12 14:51:40 crc kubenswrapper[4832]: E0312 14:51:40.894925 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-727pv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,Volum
eDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gc4cj_openshift-marketplace(28ad10d5-8a9a-418b-af56-da46474279fe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.895768 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p"] Mar 12 14:51:40 crc kubenswrapper[4832]: E0312 14:51:40.896126 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gc4cj" podUID="28ad10d5-8a9a-418b-af56-da46474279fe" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.900746 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87bce1d1-647c-4317-81eb-2fe7564306b5-config\") pod \"87bce1d1-647c-4317-81eb-2fe7564306b5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.900813 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87bce1d1-647c-4317-81eb-2fe7564306b5-client-ca\") pod \"87bce1d1-647c-4317-81eb-2fe7564306b5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.900856 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87bce1d1-647c-4317-81eb-2fe7564306b5-serving-cert\") pod \"87bce1d1-647c-4317-81eb-2fe7564306b5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " Mar 
12 14:51:40 crc kubenswrapper[4832]: E0312 14:51:40.900747 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.900884 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbszh\" (UniqueName: \"kubernetes.io/projected/87bce1d1-647c-4317-81eb-2fe7564306b5-kube-api-access-gbszh\") pod \"87bce1d1-647c-4317-81eb-2fe7564306b5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.900976 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54kzm\" (UniqueName: \"kubernetes.io/projected/f34feff1-dcd4-4c93-a6f1-355d1f506425-kube-api-access-54kzm\") pod \"f34feff1-dcd4-4c93-a6f1-355d1f506425\" (UID: \"f34feff1-dcd4-4c93-a6f1-355d1f506425\") " Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.901022 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87bce1d1-647c-4317-81eb-2fe7564306b5-proxy-ca-bundles\") pod \"87bce1d1-647c-4317-81eb-2fe7564306b5\" (UID: \"87bce1d1-647c-4317-81eb-2fe7564306b5\") " Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.901066 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f34feff1-dcd4-4c93-a6f1-355d1f506425-serving-cert\") pod \"f34feff1-dcd4-4c93-a6f1-355d1f506425\" (UID: \"f34feff1-dcd4-4c93-a6f1-355d1f506425\") " Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.901315 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f34feff1-dcd4-4c93-a6f1-355d1f506425-config\") pod 
\"f34feff1-dcd4-4c93-a6f1-355d1f506425\" (UID: \"f34feff1-dcd4-4c93-a6f1-355d1f506425\") " Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.901440 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f34feff1-dcd4-4c93-a6f1-355d1f506425-client-ca\") pod \"f34feff1-dcd4-4c93-a6f1-355d1f506425\" (UID: \"f34feff1-dcd4-4c93-a6f1-355d1f506425\") " Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.901476 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87bce1d1-647c-4317-81eb-2fe7564306b5-client-ca" (OuterVolumeSpecName: "client-ca") pod "87bce1d1-647c-4317-81eb-2fe7564306b5" (UID: "87bce1d1-647c-4317-81eb-2fe7564306b5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:40 crc kubenswrapper[4832]: E0312 14:51:40.900999 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-txw2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-zjrmj_openshift-marketplace(67ae4e40-af35-414c-8be7-4f9776319561): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.901760 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8kvl\" (UniqueName: \"kubernetes.io/projected/246fe481-818a-4f44-8397-7723c7cd7afe-kube-api-access-p8kvl\") pod \"route-controller-manager-5b67968f5c-q5q4p\" (UID: \"246fe481-818a-4f44-8397-7723c7cd7afe\") " 
pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.901795 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/246fe481-818a-4f44-8397-7723c7cd7afe-serving-cert\") pod \"route-controller-manager-5b67968f5c-q5q4p\" (UID: \"246fe481-818a-4f44-8397-7723c7cd7afe\") " pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.901904 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/246fe481-818a-4f44-8397-7723c7cd7afe-config\") pod \"route-controller-manager-5b67968f5c-q5q4p\" (UID: \"246fe481-818a-4f44-8397-7723c7cd7afe\") " pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.901940 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/246fe481-818a-4f44-8397-7723c7cd7afe-client-ca\") pod \"route-controller-manager-5b67968f5c-q5q4p\" (UID: \"246fe481-818a-4f44-8397-7723c7cd7afe\") " pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.901992 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87bce1d1-647c-4317-81eb-2fe7564306b5-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.902108 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87bce1d1-647c-4317-81eb-2fe7564306b5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "87bce1d1-647c-4317-81eb-2fe7564306b5" (UID: 
"87bce1d1-647c-4317-81eb-2fe7564306b5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:40 crc kubenswrapper[4832]: E0312 14:51:40.902741 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-zjrmj" podUID="67ae4e40-af35-414c-8be7-4f9776319561" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.902842 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34feff1-dcd4-4c93-a6f1-355d1f506425-client-ca" (OuterVolumeSpecName: "client-ca") pod "f34feff1-dcd4-4c93-a6f1-355d1f506425" (UID: "f34feff1-dcd4-4c93-a6f1-355d1f506425"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.904399 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87bce1d1-647c-4317-81eb-2fe7564306b5-config" (OuterVolumeSpecName: "config") pod "87bce1d1-647c-4317-81eb-2fe7564306b5" (UID: "87bce1d1-647c-4317-81eb-2fe7564306b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.904590 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34feff1-dcd4-4c93-a6f1-355d1f506425-config" (OuterVolumeSpecName: "config") pod "f34feff1-dcd4-4c93-a6f1-355d1f506425" (UID: "f34feff1-dcd4-4c93-a6f1-355d1f506425"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.908430 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34feff1-dcd4-4c93-a6f1-355d1f506425-kube-api-access-54kzm" (OuterVolumeSpecName: "kube-api-access-54kzm") pod "f34feff1-dcd4-4c93-a6f1-355d1f506425" (UID: "f34feff1-dcd4-4c93-a6f1-355d1f506425"). InnerVolumeSpecName "kube-api-access-54kzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.908448 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87bce1d1-647c-4317-81eb-2fe7564306b5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "87bce1d1-647c-4317-81eb-2fe7564306b5" (UID: "87bce1d1-647c-4317-81eb-2fe7564306b5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.909053 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f34feff1-dcd4-4c93-a6f1-355d1f506425-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f34feff1-dcd4-4c93-a6f1-355d1f506425" (UID: "f34feff1-dcd4-4c93-a6f1-355d1f506425"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:40 crc kubenswrapper[4832]: I0312 14:51:40.909746 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87bce1d1-647c-4317-81eb-2fe7564306b5-kube-api-access-gbszh" (OuterVolumeSpecName: "kube-api-access-gbszh") pod "87bce1d1-647c-4317-81eb-2fe7564306b5" (UID: "87bce1d1-647c-4317-81eb-2fe7564306b5"). InnerVolumeSpecName "kube-api-access-gbszh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:40 crc kubenswrapper[4832]: E0312 14:51:40.962088 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 12 14:51:40 crc kubenswrapper[4832]: E0312 14:51:40.962239 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59fzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},
RestartPolicy:nil,} start failed in pod certified-operators-qbwsc_openshift-marketplace(7fdc1c63-8a73-405f-aede-75834651cccc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 14:51:40 crc kubenswrapper[4832]: E0312 14:51:40.963432 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qbwsc" podUID="7fdc1c63-8a73-405f-aede-75834651cccc" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.003335 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/246fe481-818a-4f44-8397-7723c7cd7afe-config\") pod \"route-controller-manager-5b67968f5c-q5q4p\" (UID: \"246fe481-818a-4f44-8397-7723c7cd7afe\") " pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.003423 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/246fe481-818a-4f44-8397-7723c7cd7afe-client-ca\") pod \"route-controller-manager-5b67968f5c-q5q4p\" (UID: \"246fe481-818a-4f44-8397-7723c7cd7afe\") " pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.003494 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8kvl\" (UniqueName: \"kubernetes.io/projected/246fe481-818a-4f44-8397-7723c7cd7afe-kube-api-access-p8kvl\") pod \"route-controller-manager-5b67968f5c-q5q4p\" (UID: \"246fe481-818a-4f44-8397-7723c7cd7afe\") " pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" Mar 12 
14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.003546 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/246fe481-818a-4f44-8397-7723c7cd7afe-serving-cert\") pod \"route-controller-manager-5b67968f5c-q5q4p\" (UID: \"246fe481-818a-4f44-8397-7723c7cd7afe\") " pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.003738 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f34feff1-dcd4-4c93-a6f1-355d1f506425-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.003752 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87bce1d1-647c-4317-81eb-2fe7564306b5-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.003761 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87bce1d1-647c-4317-81eb-2fe7564306b5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.003772 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbszh\" (UniqueName: \"kubernetes.io/projected/87bce1d1-647c-4317-81eb-2fe7564306b5-kube-api-access-gbszh\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.003781 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54kzm\" (UniqueName: \"kubernetes.io/projected/f34feff1-dcd4-4c93-a6f1-355d1f506425-kube-api-access-54kzm\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.003802 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87bce1d1-647c-4317-81eb-2fe7564306b5-proxy-ca-bundles\") 
on node \"crc\" DevicePath \"\"" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.003811 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f34feff1-dcd4-4c93-a6f1-355d1f506425-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.003819 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f34feff1-dcd4-4c93-a6f1-355d1f506425-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.004623 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/246fe481-818a-4f44-8397-7723c7cd7afe-client-ca\") pod \"route-controller-manager-5b67968f5c-q5q4p\" (UID: \"246fe481-818a-4f44-8397-7723c7cd7afe\") " pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.005101 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/246fe481-818a-4f44-8397-7723c7cd7afe-config\") pod \"route-controller-manager-5b67968f5c-q5q4p\" (UID: \"246fe481-818a-4f44-8397-7723c7cd7afe\") " pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.006661 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/246fe481-818a-4f44-8397-7723c7cd7afe-serving-cert\") pod \"route-controller-manager-5b67968f5c-q5q4p\" (UID: \"246fe481-818a-4f44-8397-7723c7cd7afe\") " pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.017547 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8kvl\" (UniqueName: 
\"kubernetes.io/projected/246fe481-818a-4f44-8397-7723c7cd7afe-kube-api-access-p8kvl\") pod \"route-controller-manager-5b67968f5c-q5q4p\" (UID: \"246fe481-818a-4f44-8397-7723c7cd7afe\") " pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.204213 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.205840 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.208663 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.213134 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.213187 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.214032 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.308399 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f434b0ff-2950-48a3-85a1-f33d7a078da1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f434b0ff-2950-48a3-85a1-f33d7a078da1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.308686 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/f434b0ff-2950-48a3-85a1-f33d7a078da1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f434b0ff-2950-48a3-85a1-f33d7a078da1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.409796 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f434b0ff-2950-48a3-85a1-f33d7a078da1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f434b0ff-2950-48a3-85a1-f33d7a078da1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.409871 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f434b0ff-2950-48a3-85a1-f33d7a078da1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f434b0ff-2950-48a3-85a1-f33d7a078da1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.409923 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f434b0ff-2950-48a3-85a1-f33d7a078da1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f434b0ff-2950-48a3-85a1-f33d7a078da1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.426271 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f434b0ff-2950-48a3-85a1-f33d7a078da1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f434b0ff-2950-48a3-85a1-f33d7a078da1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.514430 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.515584 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx" event={"ID":"f34feff1-dcd4-4c93-a6f1-355d1f506425","Type":"ContainerDied","Data":"a8ee4d997f9aa27700aee58b38bc262e60d58fe1e6ec3d7fe5cc1c5de861791c"} Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.515635 4832 scope.go:117] "RemoveContainer" containerID="6d00ae9d0afa89f0065393d2534c38cbbf064fc394ee1ab6a7e0ae40e8187019" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.526943 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" event={"ID":"c3abc18e-3b7e-4afe-b35b-3b619290e875","Type":"ContainerStarted","Data":"609759b192b19fa5c758a808746c06d20d1ed8f913878ec22fafcd4af6c6accd"} Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.530359 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.531144 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d55c7d5-lmvd5" event={"ID":"87bce1d1-647c-4317-81eb-2fe7564306b5","Type":"ContainerDied","Data":"4e0389c5a99b5d184b4f93eee7187a3b21f7cb1042047782e2b3b6577457d719"} Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.534078 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.564993 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx"] Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.568901 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd9bf79c5-ztnrx"] Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.595261 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77d55c7d5-lmvd5"] Mar 12 14:51:41 crc kubenswrapper[4832]: I0312 14:51:41.602018 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77d55c7d5-lmvd5"] Mar 12 14:51:42 crc kubenswrapper[4832]: I0312 14:51:42.625690 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87bce1d1-647c-4317-81eb-2fe7564306b5" path="/var/lib/kubelet/pods/87bce1d1-647c-4317-81eb-2fe7564306b5/volumes" Mar 12 14:51:42 crc kubenswrapper[4832]: I0312 14:51:42.626668 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f34feff1-dcd4-4c93-a6f1-355d1f506425" path="/var/lib/kubelet/pods/f34feff1-dcd4-4c93-a6f1-355d1f506425/volumes" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.690374 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9f874549b-rqhhz"] Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.692135 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.700632 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.700641 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.700853 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.701060 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.701087 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.701891 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.719924 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.721682 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9f874549b-rqhhz"] Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.739757 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91fe89a5-54d3-407a-b25e-03c279face51-serving-cert\") pod \"controller-manager-9f874549b-rqhhz\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " 
pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.739926 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrvt9\" (UniqueName: \"kubernetes.io/projected/91fe89a5-54d3-407a-b25e-03c279face51-kube-api-access-lrvt9\") pod \"controller-manager-9f874549b-rqhhz\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.739994 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91fe89a5-54d3-407a-b25e-03c279face51-client-ca\") pod \"controller-manager-9f874549b-rqhhz\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.740059 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91fe89a5-54d3-407a-b25e-03c279face51-proxy-ca-bundles\") pod \"controller-manager-9f874549b-rqhhz\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.740091 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91fe89a5-54d3-407a-b25e-03c279face51-config\") pod \"controller-manager-9f874549b-rqhhz\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.841055 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/91fe89a5-54d3-407a-b25e-03c279face51-proxy-ca-bundles\") pod \"controller-manager-9f874549b-rqhhz\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.841116 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91fe89a5-54d3-407a-b25e-03c279face51-config\") pod \"controller-manager-9f874549b-rqhhz\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.841161 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91fe89a5-54d3-407a-b25e-03c279face51-serving-cert\") pod \"controller-manager-9f874549b-rqhhz\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.841289 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrvt9\" (UniqueName: \"kubernetes.io/projected/91fe89a5-54d3-407a-b25e-03c279face51-kube-api-access-lrvt9\") pod \"controller-manager-9f874549b-rqhhz\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.841323 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91fe89a5-54d3-407a-b25e-03c279face51-client-ca\") pod \"controller-manager-9f874549b-rqhhz\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.842946 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91fe89a5-54d3-407a-b25e-03c279face51-proxy-ca-bundles\") pod \"controller-manager-9f874549b-rqhhz\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.843017 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91fe89a5-54d3-407a-b25e-03c279face51-client-ca\") pod \"controller-manager-9f874549b-rqhhz\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.843552 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91fe89a5-54d3-407a-b25e-03c279face51-config\") pod \"controller-manager-9f874549b-rqhhz\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.864348 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91fe89a5-54d3-407a-b25e-03c279face51-serving-cert\") pod \"controller-manager-9f874549b-rqhhz\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:43 crc kubenswrapper[4832]: I0312 14:51:43.868855 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrvt9\" (UniqueName: \"kubernetes.io/projected/91fe89a5-54d3-407a-b25e-03c279face51-kube-api-access-lrvt9\") pod \"controller-manager-9f874549b-rqhhz\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:44 crc 
kubenswrapper[4832]: I0312 14:51:44.017914 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:44 crc kubenswrapper[4832]: E0312 14:51:44.292864 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9n2m2" podUID="bb45efb5-4239-4b47-9664-12fd61be0894" Mar 12 14:51:44 crc kubenswrapper[4832]: E0312 14:51:44.292866 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gc4cj" podUID="28ad10d5-8a9a-418b-af56-da46474279fe" Mar 12 14:51:44 crc kubenswrapper[4832]: E0312 14:51:44.292864 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qbwsc" podUID="7fdc1c63-8a73-405f-aede-75834651cccc" Mar 12 14:51:44 crc kubenswrapper[4832]: E0312 14:51:44.292911 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-zjrmj" podUID="67ae4e40-af35-414c-8be7-4f9776319561" Mar 12 14:51:44 crc kubenswrapper[4832]: I0312 14:51:44.312887 4832 scope.go:117] "RemoveContainer" containerID="b66ca919b749c6c26113bb81a0cabde5b274f9660b424a5526cbf3614c94ff97" Mar 12 14:51:44 crc kubenswrapper[4832]: E0312 
14:51:44.363466 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 12 14:51:44 crc kubenswrapper[4832]: E0312 14:51:44.363943 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nlr5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-fwqfd_openshift-marketplace(584742b5-4cf7-4fcf-8b62-ad79df0bc737): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 14:51:44 crc kubenswrapper[4832]: E0312 14:51:44.365148 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-fwqfd" podUID="584742b5-4cf7-4fcf-8b62-ad79df0bc737" Mar 12 14:51:44 crc kubenswrapper[4832]: I0312 14:51:44.495677 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 12 14:51:44 crc kubenswrapper[4832]: W0312 14:51:44.518981 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf434b0ff_2950_48a3_85a1_f33d7a078da1.slice/crio-bac4a43d3f5cbe9df33784ef94a86795f94b601af56bbfc272e83d65c4417062 WatchSource:0}: Error finding container bac4a43d3f5cbe9df33784ef94a86795f94b601af56bbfc272e83d65c4417062: Status 404 returned error can't find the container with id bac4a43d3f5cbe9df33784ef94a86795f94b601af56bbfc272e83d65c4417062 Mar 12 14:51:44 crc kubenswrapper[4832]: I0312 14:51:44.554996 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f434b0ff-2950-48a3-85a1-f33d7a078da1","Type":"ContainerStarted","Data":"bac4a43d3f5cbe9df33784ef94a86795f94b601af56bbfc272e83d65c4417062"} Mar 12 14:51:44 crc kubenswrapper[4832]: I0312 14:51:44.572868 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p"] Mar 12 14:51:44 crc kubenswrapper[4832]: E0312 14:51:44.597359 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-fwqfd" podUID="584742b5-4cf7-4fcf-8b62-ad79df0bc737" Mar 12 14:51:44 crc kubenswrapper[4832]: W0312 14:51:44.598196 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod246fe481_818a_4f44_8397_7723c7cd7afe.slice/crio-d6f8d5fe918d3491f65af1a24696deaa56fdc7dce17834faada450d247595d1e WatchSource:0}: Error finding container d6f8d5fe918d3491f65af1a24696deaa56fdc7dce17834faada450d247595d1e: Status 404 returned error can't find the container with id d6f8d5fe918d3491f65af1a24696deaa56fdc7dce17834faada450d247595d1e Mar 12 14:51:44 crc kubenswrapper[4832]: I0312 14:51:44.662091 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9f874549b-rqhhz"] Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.312513 4832 csr.go:261] certificate signing request csr-pptlb is approved, waiting to be issued Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.320421 4832 csr.go:257] certificate signing request csr-pptlb is issued Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.565726 4832 generic.go:334] "Generic (PLEG): container finished" podID="f434b0ff-2950-48a3-85a1-f33d7a078da1" containerID="cecec763d1dfe6f09b579577be6557cdec88999a4ea368292f65fe5c8d9031ab" exitCode=0 Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.565828 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f434b0ff-2950-48a3-85a1-f33d7a078da1","Type":"ContainerDied","Data":"cecec763d1dfe6f09b579577be6557cdec88999a4ea368292f65fe5c8d9031ab"} Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.567131 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" event={"ID":"91fe89a5-54d3-407a-b25e-03c279face51","Type":"ContainerStarted","Data":"9e0959ed97f842aeb663888a6141042562de70c224f09936fb0617176a93c5fb"} Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.567157 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" event={"ID":"91fe89a5-54d3-407a-b25e-03c279face51","Type":"ContainerStarted","Data":"6e24cf1f88a5a2513b8d5f16454cb89a189f7a22ce788265493973da3e413564"} Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.567351 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.569170 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555450-w9jtm" event={"ID":"35da9b9e-133b-4d7f-a32e-908d9fc7734b","Type":"ContainerDied","Data":"170b98eaee6906093b6abb700460c830fa3a2f8071efff79ef655720f6d81364"} Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.569192 4832 generic.go:334] "Generic (PLEG): container finished" podID="35da9b9e-133b-4d7f-a32e-908d9fc7734b" containerID="170b98eaee6906093b6abb700460c830fa3a2f8071efff79ef655720f6d81364" exitCode=0 Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.571547 4832 generic.go:334] "Generic (PLEG): container finished" podID="945e7b12-b4c7-45e9-9956-6dda3eed3c62" containerID="c5d3e6355ad4f53cc49a2bb3c9fa291abe55db7a9d285ae109d61fa0a544569d" exitCode=0 Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.571614 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pf4z" event={"ID":"945e7b12-b4c7-45e9-9956-6dda3eed3c62","Type":"ContainerDied","Data":"c5d3e6355ad4f53cc49a2bb3c9fa291abe55db7a9d285ae109d61fa0a544569d"} Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.573360 4832 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.574009 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" event={"ID":"c3abc18e-3b7e-4afe-b35b-3b619290e875","Type":"ContainerStarted","Data":"207521de2f39ad36398c797dc2ac8961602ac316ac64775a6346ba704371360c"} Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.574064 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lmjrb" event={"ID":"c3abc18e-3b7e-4afe-b35b-3b619290e875","Type":"ContainerStarted","Data":"d8957d61d1b63e35f341fc114e259d0d35c59614bbed3245f5fb786ea139e61d"} Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.576623 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" event={"ID":"246fe481-818a-4f44-8397-7723c7cd7afe","Type":"ContainerStarted","Data":"80c73debbdb2449aaa9af2c6f5645b22c9ce1424a277b861fb59ee950a1889ad"} Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.576670 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" event={"ID":"246fe481-818a-4f44-8397-7723c7cd7afe","Type":"ContainerStarted","Data":"d6f8d5fe918d3491f65af1a24696deaa56fdc7dce17834faada450d247595d1e"} Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.576877 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.589235 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.604194 4832 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" podStartSLOduration=20.604170484 podStartE2EDuration="20.604170484s" podCreationTimestamp="2026-03-12 14:51:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:45.601105397 +0000 UTC m=+264.245119633" watchObservedRunningTime="2026-03-12 14:51:45.604170484 +0000 UTC m=+264.248184710" Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.616240 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-lmjrb" podStartSLOduration=199.616224178 podStartE2EDuration="3m19.616224178s" podCreationTimestamp="2026-03-12 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:45.614596861 +0000 UTC m=+264.258611097" watchObservedRunningTime="2026-03-12 14:51:45.616224178 +0000 UTC m=+264.260238404" Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.651680 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" podStartSLOduration=20.651661097 podStartE2EDuration="20.651661097s" podCreationTimestamp="2026-03-12 14:51:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:45.650547796 +0000 UTC m=+264.294562022" watchObservedRunningTime="2026-03-12 14:51:45.651661097 +0000 UTC m=+264.295675323" Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.730184 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9f874549b-rqhhz"] Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.884372 4832 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p"] Mar 12 14:51:45 crc kubenswrapper[4832]: I0312 14:51:45.999197 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.000142 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.010735 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.067371 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8df0cbbc-c142-4d08-ad20-82ef1be6ce5d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8df0cbbc-c142-4d08-ad20-82ef1be6ce5d\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.067450 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8df0cbbc-c142-4d08-ad20-82ef1be6ce5d-var-lock\") pod \"installer-9-crc\" (UID: \"8df0cbbc-c142-4d08-ad20-82ef1be6ce5d\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.067472 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8df0cbbc-c142-4d08-ad20-82ef1be6ce5d-kube-api-access\") pod \"installer-9-crc\" (UID: \"8df0cbbc-c142-4d08-ad20-82ef1be6ce5d\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.168500 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/8df0cbbc-c142-4d08-ad20-82ef1be6ce5d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8df0cbbc-c142-4d08-ad20-82ef1be6ce5d\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.168915 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8df0cbbc-c142-4d08-ad20-82ef1be6ce5d-var-lock\") pod \"installer-9-crc\" (UID: \"8df0cbbc-c142-4d08-ad20-82ef1be6ce5d\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.168943 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8df0cbbc-c142-4d08-ad20-82ef1be6ce5d-kube-api-access\") pod \"installer-9-crc\" (UID: \"8df0cbbc-c142-4d08-ad20-82ef1be6ce5d\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.168968 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8df0cbbc-c142-4d08-ad20-82ef1be6ce5d-var-lock\") pod \"installer-9-crc\" (UID: \"8df0cbbc-c142-4d08-ad20-82ef1be6ce5d\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.168627 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8df0cbbc-c142-4d08-ad20-82ef1be6ce5d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8df0cbbc-c142-4d08-ad20-82ef1be6ce5d\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.186302 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8df0cbbc-c142-4d08-ad20-82ef1be6ce5d-kube-api-access\") pod \"installer-9-crc\" (UID: \"8df0cbbc-c142-4d08-ad20-82ef1be6ce5d\") " 
pod="openshift-kube-apiserver/installer-9-crc" Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.317355 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.321786 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-01 17:10:29.411333024 +0000 UTC Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.321809 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6338h18m43.089525781s for next certificate rotation Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.493033 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.582143 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8df0cbbc-c142-4d08-ad20-82ef1be6ce5d","Type":"ContainerStarted","Data":"7ebac6c89d18b04b8f21a00b3d6462b8a537fe63fc930ec6dcc5266e1dafe2a7"} Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.588997 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pf4z" event={"ID":"945e7b12-b4c7-45e9-9956-6dda3eed3c62","Type":"ContainerStarted","Data":"b15d8e0d138920b0a631005242ed3ecd61b4d2eefd94016f80e3acd48314e6ee"} Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.612793 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7pf4z" podStartSLOduration=2.853977388 podStartE2EDuration="38.612771023s" podCreationTimestamp="2026-03-12 14:51:08 +0000 UTC" firstStartedPulling="2026-03-12 14:51:10.307018773 +0000 UTC m=+228.951032989" lastFinishedPulling="2026-03-12 14:51:46.065812398 +0000 UTC m=+264.709826624" observedRunningTime="2026-03-12 14:51:46.606633178 +0000 UTC 
m=+265.250647424" watchObservedRunningTime="2026-03-12 14:51:46.612771023 +0000 UTC m=+265.256785249" Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.795699 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555450-w9jtm" Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.858026 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.910993 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f434b0ff-2950-48a3-85a1-f33d7a078da1-kubelet-dir\") pod \"f434b0ff-2950-48a3-85a1-f33d7a078da1\" (UID: \"f434b0ff-2950-48a3-85a1-f33d7a078da1\") " Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.911110 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f434b0ff-2950-48a3-85a1-f33d7a078da1-kube-api-access\") pod \"f434b0ff-2950-48a3-85a1-f33d7a078da1\" (UID: \"f434b0ff-2950-48a3-85a1-f33d7a078da1\") " Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.911117 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f434b0ff-2950-48a3-85a1-f33d7a078da1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f434b0ff-2950-48a3-85a1-f33d7a078da1" (UID: "f434b0ff-2950-48a3-85a1-f33d7a078da1"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.911183 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkx8k\" (UniqueName: \"kubernetes.io/projected/35da9b9e-133b-4d7f-a32e-908d9fc7734b-kube-api-access-fkx8k\") pod \"35da9b9e-133b-4d7f-a32e-908d9fc7734b\" (UID: \"35da9b9e-133b-4d7f-a32e-908d9fc7734b\") " Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.911416 4832 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f434b0ff-2950-48a3-85a1-f33d7a078da1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.916262 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f434b0ff-2950-48a3-85a1-f33d7a078da1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f434b0ff-2950-48a3-85a1-f33d7a078da1" (UID: "f434b0ff-2950-48a3-85a1-f33d7a078da1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:46 crc kubenswrapper[4832]: I0312 14:51:46.919882 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35da9b9e-133b-4d7f-a32e-908d9fc7734b-kube-api-access-fkx8k" (OuterVolumeSpecName: "kube-api-access-fkx8k") pod "35da9b9e-133b-4d7f-a32e-908d9fc7734b" (UID: "35da9b9e-133b-4d7f-a32e-908d9fc7734b"). InnerVolumeSpecName "kube-api-access-fkx8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:47 crc kubenswrapper[4832]: I0312 14:51:47.012735 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkx8k\" (UniqueName: \"kubernetes.io/projected/35da9b9e-133b-4d7f-a32e-908d9fc7734b-kube-api-access-fkx8k\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:47 crc kubenswrapper[4832]: I0312 14:51:47.012765 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f434b0ff-2950-48a3-85a1-f33d7a078da1-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:47 crc kubenswrapper[4832]: I0312 14:51:47.322473 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-10 01:05:52.971918995 +0000 UTC Mar 12 14:51:47 crc kubenswrapper[4832]: I0312 14:51:47.322532 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7282h14m5.649390069s for next certificate rotation Mar 12 14:51:47 crc kubenswrapper[4832]: I0312 14:51:47.591523 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f434b0ff-2950-48a3-85a1-f33d7a078da1","Type":"ContainerDied","Data":"bac4a43d3f5cbe9df33784ef94a86795f94b601af56bbfc272e83d65c4417062"} Mar 12 14:51:47 crc kubenswrapper[4832]: I0312 14:51:47.591777 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bac4a43d3f5cbe9df33784ef94a86795f94b601af56bbfc272e83d65c4417062" Mar 12 14:51:47 crc kubenswrapper[4832]: I0312 14:51:47.591844 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 14:51:47 crc kubenswrapper[4832]: I0312 14:51:47.596733 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555450-w9jtm" event={"ID":"35da9b9e-133b-4d7f-a32e-908d9fc7734b","Type":"ContainerDied","Data":"a2e31dc1e06b8000c3f5da3c5aaa958359671b39e53887d7cef496248edd32b9"} Mar 12 14:51:47 crc kubenswrapper[4832]: I0312 14:51:47.596770 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2e31dc1e06b8000c3f5da3c5aaa958359671b39e53887d7cef496248edd32b9" Mar 12 14:51:47 crc kubenswrapper[4832]: I0312 14:51:47.596827 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555450-w9jtm" Mar 12 14:51:47 crc kubenswrapper[4832]: I0312 14:51:47.603551 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8df0cbbc-c142-4d08-ad20-82ef1be6ce5d","Type":"ContainerStarted","Data":"65bddd1c9cc168b31ba8288bb95776b2ef10c9b54db4dc48a5cf0fc09d13f549"} Mar 12 14:51:47 crc kubenswrapper[4832]: I0312 14:51:47.603653 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" podUID="91fe89a5-54d3-407a-b25e-03c279face51" containerName="controller-manager" containerID="cri-o://9e0959ed97f842aeb663888a6141042562de70c224f09936fb0617176a93c5fb" gracePeriod=30 Mar 12 14:51:47 crc kubenswrapper[4832]: I0312 14:51:47.603769 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" podUID="246fe481-818a-4f44-8397-7723c7cd7afe" containerName="route-controller-manager" containerID="cri-o://80c73debbdb2449aaa9af2c6f5645b22c9ce1424a277b861fb59ee950a1889ad" gracePeriod=30 Mar 12 14:51:47 crc kubenswrapper[4832]: I0312 14:51:47.620129 
4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.620112945 podStartE2EDuration="2.620112945s" podCreationTimestamp="2026-03-12 14:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:47.616658307 +0000 UTC m=+266.260672533" watchObservedRunningTime="2026-03-12 14:51:47.620112945 +0000 UTC m=+266.264127171" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.012039 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.016464 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.127302 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91fe89a5-54d3-407a-b25e-03c279face51-config\") pod \"91fe89a5-54d3-407a-b25e-03c279face51\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.127352 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/246fe481-818a-4f44-8397-7723c7cd7afe-serving-cert\") pod \"246fe481-818a-4f44-8397-7723c7cd7afe\" (UID: \"246fe481-818a-4f44-8397-7723c7cd7afe\") " Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.127369 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/246fe481-818a-4f44-8397-7723c7cd7afe-config\") pod \"246fe481-818a-4f44-8397-7723c7cd7afe\" (UID: \"246fe481-818a-4f44-8397-7723c7cd7afe\") " Mar 12 14:51:48 crc 
kubenswrapper[4832]: I0312 14:51:48.127411 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91fe89a5-54d3-407a-b25e-03c279face51-client-ca\") pod \"91fe89a5-54d3-407a-b25e-03c279face51\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.127427 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/246fe481-818a-4f44-8397-7723c7cd7afe-client-ca\") pod \"246fe481-818a-4f44-8397-7723c7cd7afe\" (UID: \"246fe481-818a-4f44-8397-7723c7cd7afe\") " Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.127448 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91fe89a5-54d3-407a-b25e-03c279face51-serving-cert\") pod \"91fe89a5-54d3-407a-b25e-03c279face51\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.127474 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8kvl\" (UniqueName: \"kubernetes.io/projected/246fe481-818a-4f44-8397-7723c7cd7afe-kube-api-access-p8kvl\") pod \"246fe481-818a-4f44-8397-7723c7cd7afe\" (UID: \"246fe481-818a-4f44-8397-7723c7cd7afe\") " Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.127542 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrvt9\" (UniqueName: \"kubernetes.io/projected/91fe89a5-54d3-407a-b25e-03c279face51-kube-api-access-lrvt9\") pod \"91fe89a5-54d3-407a-b25e-03c279face51\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.127564 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/91fe89a5-54d3-407a-b25e-03c279face51-proxy-ca-bundles\") pod \"91fe89a5-54d3-407a-b25e-03c279face51\" (UID: \"91fe89a5-54d3-407a-b25e-03c279face51\") " Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.128182 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91fe89a5-54d3-407a-b25e-03c279face51-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "91fe89a5-54d3-407a-b25e-03c279face51" (UID: "91fe89a5-54d3-407a-b25e-03c279face51"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.128556 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91fe89a5-54d3-407a-b25e-03c279face51-config" (OuterVolumeSpecName: "config") pod "91fe89a5-54d3-407a-b25e-03c279face51" (UID: "91fe89a5-54d3-407a-b25e-03c279face51"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.129586 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/246fe481-818a-4f44-8397-7723c7cd7afe-client-ca" (OuterVolumeSpecName: "client-ca") pod "246fe481-818a-4f44-8397-7723c7cd7afe" (UID: "246fe481-818a-4f44-8397-7723c7cd7afe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.129694 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/246fe481-818a-4f44-8397-7723c7cd7afe-config" (OuterVolumeSpecName: "config") pod "246fe481-818a-4f44-8397-7723c7cd7afe" (UID: "246fe481-818a-4f44-8397-7723c7cd7afe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.129776 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91fe89a5-54d3-407a-b25e-03c279face51-client-ca" (OuterVolumeSpecName: "client-ca") pod "91fe89a5-54d3-407a-b25e-03c279face51" (UID: "91fe89a5-54d3-407a-b25e-03c279face51"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.133277 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/246fe481-818a-4f44-8397-7723c7cd7afe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "246fe481-818a-4f44-8397-7723c7cd7afe" (UID: "246fe481-818a-4f44-8397-7723c7cd7afe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.133328 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/246fe481-818a-4f44-8397-7723c7cd7afe-kube-api-access-p8kvl" (OuterVolumeSpecName: "kube-api-access-p8kvl") pod "246fe481-818a-4f44-8397-7723c7cd7afe" (UID: "246fe481-818a-4f44-8397-7723c7cd7afe"). InnerVolumeSpecName "kube-api-access-p8kvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.133871 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91fe89a5-54d3-407a-b25e-03c279face51-kube-api-access-lrvt9" (OuterVolumeSpecName: "kube-api-access-lrvt9") pod "91fe89a5-54d3-407a-b25e-03c279face51" (UID: "91fe89a5-54d3-407a-b25e-03c279face51"). InnerVolumeSpecName "kube-api-access-lrvt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.133966 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91fe89a5-54d3-407a-b25e-03c279face51-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "91fe89a5-54d3-407a-b25e-03c279face51" (UID: "91fe89a5-54d3-407a-b25e-03c279face51"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.228609 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91fe89a5-54d3-407a-b25e-03c279face51-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.228645 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/246fe481-818a-4f44-8397-7723c7cd7afe-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.228655 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/246fe481-818a-4f44-8397-7723c7cd7afe-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.228664 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91fe89a5-54d3-407a-b25e-03c279face51-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.228672 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/246fe481-818a-4f44-8397-7723c7cd7afe-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.228679 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91fe89a5-54d3-407a-b25e-03c279face51-serving-cert\") on 
node \"crc\" DevicePath \"\"" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.228687 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8kvl\" (UniqueName: \"kubernetes.io/projected/246fe481-818a-4f44-8397-7723c7cd7afe-kube-api-access-p8kvl\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.228697 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrvt9\" (UniqueName: \"kubernetes.io/projected/91fe89a5-54d3-407a-b25e-03c279face51-kube-api-access-lrvt9\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.228704 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91fe89a5-54d3-407a-b25e-03c279face51-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.617279 4832 generic.go:334] "Generic (PLEG): container finished" podID="246fe481-818a-4f44-8397-7723c7cd7afe" containerID="80c73debbdb2449aaa9af2c6f5645b22c9ce1424a277b861fb59ee950a1889ad" exitCode=0 Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.617352 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" event={"ID":"246fe481-818a-4f44-8397-7723c7cd7afe","Type":"ContainerDied","Data":"80c73debbdb2449aaa9af2c6f5645b22c9ce1424a277b861fb59ee950a1889ad"} Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.617380 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" event={"ID":"246fe481-818a-4f44-8397-7723c7cd7afe","Type":"ContainerDied","Data":"d6f8d5fe918d3491f65af1a24696deaa56fdc7dce17834faada450d247595d1e"} Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.617398 4832 scope.go:117] "RemoveContainer" 
containerID="80c73debbdb2449aaa9af2c6f5645b22c9ce1424a277b861fb59ee950a1889ad" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.617548 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.623893 4832 generic.go:334] "Generic (PLEG): container finished" podID="91fe89a5-54d3-407a-b25e-03c279face51" containerID="9e0959ed97f842aeb663888a6141042562de70c224f09936fb0617176a93c5fb" exitCode=0 Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.624122 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.634959 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" event={"ID":"91fe89a5-54d3-407a-b25e-03c279face51","Type":"ContainerDied","Data":"9e0959ed97f842aeb663888a6141042562de70c224f09936fb0617176a93c5fb"} Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.634999 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9f874549b-rqhhz" event={"ID":"91fe89a5-54d3-407a-b25e-03c279face51","Type":"ContainerDied","Data":"6e24cf1f88a5a2513b8d5f16454cb89a189f7a22ce788265493973da3e413564"} Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.642772 4832 scope.go:117] "RemoveContainer" containerID="80c73debbdb2449aaa9af2c6f5645b22c9ce1424a277b861fb59ee950a1889ad" Mar 12 14:51:48 crc kubenswrapper[4832]: E0312 14:51:48.643593 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80c73debbdb2449aaa9af2c6f5645b22c9ce1424a277b861fb59ee950a1889ad\": container with ID starting with 80c73debbdb2449aaa9af2c6f5645b22c9ce1424a277b861fb59ee950a1889ad not found: 
ID does not exist" containerID="80c73debbdb2449aaa9af2c6f5645b22c9ce1424a277b861fb59ee950a1889ad" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.643642 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80c73debbdb2449aaa9af2c6f5645b22c9ce1424a277b861fb59ee950a1889ad"} err="failed to get container status \"80c73debbdb2449aaa9af2c6f5645b22c9ce1424a277b861fb59ee950a1889ad\": rpc error: code = NotFound desc = could not find container \"80c73debbdb2449aaa9af2c6f5645b22c9ce1424a277b861fb59ee950a1889ad\": container with ID starting with 80c73debbdb2449aaa9af2c6f5645b22c9ce1424a277b861fb59ee950a1889ad not found: ID does not exist" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.643673 4832 scope.go:117] "RemoveContainer" containerID="9e0959ed97f842aeb663888a6141042562de70c224f09936fb0617176a93c5fb" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.667943 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p"] Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.672728 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b67968f5c-q5q4p"] Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.678122 4832 scope.go:117] "RemoveContainer" containerID="9e0959ed97f842aeb663888a6141042562de70c224f09936fb0617176a93c5fb" Mar 12 14:51:48 crc kubenswrapper[4832]: E0312 14:51:48.679649 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e0959ed97f842aeb663888a6141042562de70c224f09936fb0617176a93c5fb\": container with ID starting with 9e0959ed97f842aeb663888a6141042562de70c224f09936fb0617176a93c5fb not found: ID does not exist" containerID="9e0959ed97f842aeb663888a6141042562de70c224f09936fb0617176a93c5fb" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.679707 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e0959ed97f842aeb663888a6141042562de70c224f09936fb0617176a93c5fb"} err="failed to get container status \"9e0959ed97f842aeb663888a6141042562de70c224f09936fb0617176a93c5fb\": rpc error: code = NotFound desc = could not find container \"9e0959ed97f842aeb663888a6141042562de70c224f09936fb0617176a93c5fb\": container with ID starting with 9e0959ed97f842aeb663888a6141042562de70c224f09936fb0617176a93c5fb not found: ID does not exist" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.679776 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9f874549b-rqhhz"] Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.683184 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-9f874549b-rqhhz"] Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.727598 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7pf4z" Mar 12 14:51:48 crc kubenswrapper[4832]: I0312 14:51:48.727667 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7pf4z" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.692481 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cd6949dbf-2sct6"] Mar 12 14:51:49 crc kubenswrapper[4832]: E0312 14:51:49.692794 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91fe89a5-54d3-407a-b25e-03c279face51" containerName="controller-manager" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.692806 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="91fe89a5-54d3-407a-b25e-03c279face51" containerName="controller-manager" Mar 12 14:51:49 crc kubenswrapper[4832]: E0312 14:51:49.692831 4832 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="35da9b9e-133b-4d7f-a32e-908d9fc7734b" containerName="oc" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.692837 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="35da9b9e-133b-4d7f-a32e-908d9fc7734b" containerName="oc" Mar 12 14:51:49 crc kubenswrapper[4832]: E0312 14:51:49.692847 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f434b0ff-2950-48a3-85a1-f33d7a078da1" containerName="pruner" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.692853 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f434b0ff-2950-48a3-85a1-f33d7a078da1" containerName="pruner" Mar 12 14:51:49 crc kubenswrapper[4832]: E0312 14:51:49.692861 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="246fe481-818a-4f44-8397-7723c7cd7afe" containerName="route-controller-manager" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.692866 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="246fe481-818a-4f44-8397-7723c7cd7afe" containerName="route-controller-manager" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.692986 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="246fe481-818a-4f44-8397-7723c7cd7afe" containerName="route-controller-manager" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.693001 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f434b0ff-2950-48a3-85a1-f33d7a078da1" containerName="pruner" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.693011 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="35da9b9e-133b-4d7f-a32e-908d9fc7734b" containerName="oc" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.693019 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="91fe89a5-54d3-407a-b25e-03c279face51" containerName="controller-manager" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.693498 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.695752 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.695752 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr"] Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.695991 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.696454 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.697824 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.698174 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.698666 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.699019 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.699529 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.700176 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.700776 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.701445 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.701623 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.701851 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.711626 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cd6949dbf-2sct6"] Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.713021 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.728708 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr"] Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.750901 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a890b66d-31e8-4219-84e9-72f40af3df39-proxy-ca-bundles\") pod \"controller-manager-6cd6949dbf-2sct6\" (UID: \"a890b66d-31e8-4219-84e9-72f40af3df39\") " pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.750980 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xzdq\" 
(UniqueName: \"kubernetes.io/projected/142c1439-9756-40be-8305-1537d7a59ef4-kube-api-access-7xzdq\") pod \"route-controller-manager-5dcf5fbfb7-72hcr\" (UID: \"142c1439-9756-40be-8305-1537d7a59ef4\") " pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.751014 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9cfl\" (UniqueName: \"kubernetes.io/projected/a890b66d-31e8-4219-84e9-72f40af3df39-kube-api-access-k9cfl\") pod \"controller-manager-6cd6949dbf-2sct6\" (UID: \"a890b66d-31e8-4219-84e9-72f40af3df39\") " pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.751041 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a890b66d-31e8-4219-84e9-72f40af3df39-serving-cert\") pod \"controller-manager-6cd6949dbf-2sct6\" (UID: \"a890b66d-31e8-4219-84e9-72f40af3df39\") " pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.751213 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a890b66d-31e8-4219-84e9-72f40af3df39-client-ca\") pod \"controller-manager-6cd6949dbf-2sct6\" (UID: \"a890b66d-31e8-4219-84e9-72f40af3df39\") " pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.751253 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/142c1439-9756-40be-8305-1537d7a59ef4-client-ca\") pod \"route-controller-manager-5dcf5fbfb7-72hcr\" (UID: \"142c1439-9756-40be-8305-1537d7a59ef4\") " 
pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.751352 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a890b66d-31e8-4219-84e9-72f40af3df39-config\") pod \"controller-manager-6cd6949dbf-2sct6\" (UID: \"a890b66d-31e8-4219-84e9-72f40af3df39\") " pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.751415 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/142c1439-9756-40be-8305-1537d7a59ef4-config\") pod \"route-controller-manager-5dcf5fbfb7-72hcr\" (UID: \"142c1439-9756-40be-8305-1537d7a59ef4\") " pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.751482 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/142c1439-9756-40be-8305-1537d7a59ef4-serving-cert\") pod \"route-controller-manager-5dcf5fbfb7-72hcr\" (UID: \"142c1439-9756-40be-8305-1537d7a59ef4\") " pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.843922 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7pf4z" podUID="945e7b12-b4c7-45e9-9956-6dda3eed3c62" containerName="registry-server" probeResult="failure" output=< Mar 12 14:51:49 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Mar 12 14:51:49 crc kubenswrapper[4832]: > Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.852902 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/a890b66d-31e8-4219-84e9-72f40af3df39-proxy-ca-bundles\") pod \"controller-manager-6cd6949dbf-2sct6\" (UID: \"a890b66d-31e8-4219-84e9-72f40af3df39\") " pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.853019 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xzdq\" (UniqueName: \"kubernetes.io/projected/142c1439-9756-40be-8305-1537d7a59ef4-kube-api-access-7xzdq\") pod \"route-controller-manager-5dcf5fbfb7-72hcr\" (UID: \"142c1439-9756-40be-8305-1537d7a59ef4\") " pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.853052 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9cfl\" (UniqueName: \"kubernetes.io/projected/a890b66d-31e8-4219-84e9-72f40af3df39-kube-api-access-k9cfl\") pod \"controller-manager-6cd6949dbf-2sct6\" (UID: \"a890b66d-31e8-4219-84e9-72f40af3df39\") " pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.853113 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a890b66d-31e8-4219-84e9-72f40af3df39-serving-cert\") pod \"controller-manager-6cd6949dbf-2sct6\" (UID: \"a890b66d-31e8-4219-84e9-72f40af3df39\") " pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.853197 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a890b66d-31e8-4219-84e9-72f40af3df39-client-ca\") pod \"controller-manager-6cd6949dbf-2sct6\" (UID: \"a890b66d-31e8-4219-84e9-72f40af3df39\") " pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:51:49 crc 
kubenswrapper[4832]: I0312 14:51:49.853231 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/142c1439-9756-40be-8305-1537d7a59ef4-client-ca\") pod \"route-controller-manager-5dcf5fbfb7-72hcr\" (UID: \"142c1439-9756-40be-8305-1537d7a59ef4\") " pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.853303 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a890b66d-31e8-4219-84e9-72f40af3df39-config\") pod \"controller-manager-6cd6949dbf-2sct6\" (UID: \"a890b66d-31e8-4219-84e9-72f40af3df39\") " pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.853361 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/142c1439-9756-40be-8305-1537d7a59ef4-config\") pod \"route-controller-manager-5dcf5fbfb7-72hcr\" (UID: \"142c1439-9756-40be-8305-1537d7a59ef4\") " pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.853404 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/142c1439-9756-40be-8305-1537d7a59ef4-serving-cert\") pod \"route-controller-manager-5dcf5fbfb7-72hcr\" (UID: \"142c1439-9756-40be-8305-1537d7a59ef4\") " pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.854380 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a890b66d-31e8-4219-84e9-72f40af3df39-proxy-ca-bundles\") pod \"controller-manager-6cd6949dbf-2sct6\" (UID: 
\"a890b66d-31e8-4219-84e9-72f40af3df39\") " pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.854385 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a890b66d-31e8-4219-84e9-72f40af3df39-client-ca\") pod \"controller-manager-6cd6949dbf-2sct6\" (UID: \"a890b66d-31e8-4219-84e9-72f40af3df39\") " pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.854742 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/142c1439-9756-40be-8305-1537d7a59ef4-client-ca\") pod \"route-controller-manager-5dcf5fbfb7-72hcr\" (UID: \"142c1439-9756-40be-8305-1537d7a59ef4\") " pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.854982 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/142c1439-9756-40be-8305-1537d7a59ef4-config\") pod \"route-controller-manager-5dcf5fbfb7-72hcr\" (UID: \"142c1439-9756-40be-8305-1537d7a59ef4\") " pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.856328 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a890b66d-31e8-4219-84e9-72f40af3df39-config\") pod \"controller-manager-6cd6949dbf-2sct6\" (UID: \"a890b66d-31e8-4219-84e9-72f40af3df39\") " pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.858603 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/142c1439-9756-40be-8305-1537d7a59ef4-serving-cert\") 
pod \"route-controller-manager-5dcf5fbfb7-72hcr\" (UID: \"142c1439-9756-40be-8305-1537d7a59ef4\") " pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.863347 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a890b66d-31e8-4219-84e9-72f40af3df39-serving-cert\") pod \"controller-manager-6cd6949dbf-2sct6\" (UID: \"a890b66d-31e8-4219-84e9-72f40af3df39\") " pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.879386 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xzdq\" (UniqueName: \"kubernetes.io/projected/142c1439-9756-40be-8305-1537d7a59ef4-kube-api-access-7xzdq\") pod \"route-controller-manager-5dcf5fbfb7-72hcr\" (UID: \"142c1439-9756-40be-8305-1537d7a59ef4\") " pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" Mar 12 14:51:49 crc kubenswrapper[4832]: I0312 14:51:49.879627 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9cfl\" (UniqueName: \"kubernetes.io/projected/a890b66d-31e8-4219-84e9-72f40af3df39-kube-api-access-k9cfl\") pod \"controller-manager-6cd6949dbf-2sct6\" (UID: \"a890b66d-31e8-4219-84e9-72f40af3df39\") " pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:51:50 crc kubenswrapper[4832]: I0312 14:51:50.024947 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:51:50 crc kubenswrapper[4832]: I0312 14:51:50.033794 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" Mar 12 14:51:50 crc kubenswrapper[4832]: I0312 14:51:50.227565 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cd6949dbf-2sct6"] Mar 12 14:51:50 crc kubenswrapper[4832]: W0312 14:51:50.239490 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda890b66d_31e8_4219_84e9_72f40af3df39.slice/crio-82b2643b6ec16b53f4e5bfea829e7e8766b9de9cc7fc0ac593f6667755f12ba2 WatchSource:0}: Error finding container 82b2643b6ec16b53f4e5bfea829e7e8766b9de9cc7fc0ac593f6667755f12ba2: Status 404 returned error can't find the container with id 82b2643b6ec16b53f4e5bfea829e7e8766b9de9cc7fc0ac593f6667755f12ba2 Mar 12 14:51:50 crc kubenswrapper[4832]: I0312 14:51:50.279981 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr"] Mar 12 14:51:50 crc kubenswrapper[4832]: W0312 14:51:50.293361 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod142c1439_9756_40be_8305_1537d7a59ef4.slice/crio-972603947b3e262291c88c93735e309ac66f9927c53d68338ed6d91406d31309 WatchSource:0}: Error finding container 972603947b3e262291c88c93735e309ac66f9927c53d68338ed6d91406d31309: Status 404 returned error can't find the container with id 972603947b3e262291c88c93735e309ac66f9927c53d68338ed6d91406d31309 Mar 12 14:51:50 crc kubenswrapper[4832]: I0312 14:51:50.634267 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="246fe481-818a-4f44-8397-7723c7cd7afe" path="/var/lib/kubelet/pods/246fe481-818a-4f44-8397-7723c7cd7afe/volumes" Mar 12 14:51:50 crc kubenswrapper[4832]: I0312 14:51:50.635318 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="91fe89a5-54d3-407a-b25e-03c279face51" path="/var/lib/kubelet/pods/91fe89a5-54d3-407a-b25e-03c279face51/volumes" Mar 12 14:51:50 crc kubenswrapper[4832]: I0312 14:51:50.640663 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" event={"ID":"142c1439-9756-40be-8305-1537d7a59ef4","Type":"ContainerStarted","Data":"f4d46072fffd987aa8c70db380a8eb62417e58da6edea894c2084d99ebc21cc0"} Mar 12 14:51:50 crc kubenswrapper[4832]: I0312 14:51:50.641214 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" Mar 12 14:51:50 crc kubenswrapper[4832]: I0312 14:51:50.641253 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" event={"ID":"142c1439-9756-40be-8305-1537d7a59ef4","Type":"ContainerStarted","Data":"972603947b3e262291c88c93735e309ac66f9927c53d68338ed6d91406d31309"} Mar 12 14:51:50 crc kubenswrapper[4832]: I0312 14:51:50.644525 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" event={"ID":"a890b66d-31e8-4219-84e9-72f40af3df39","Type":"ContainerStarted","Data":"d34d1a2ec62d53ec97e18ddb38d07c1474923a09906cd55853ad1db0a3c71922"} Mar 12 14:51:50 crc kubenswrapper[4832]: I0312 14:51:50.644558 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" event={"ID":"a890b66d-31e8-4219-84e9-72f40af3df39","Type":"ContainerStarted","Data":"82b2643b6ec16b53f4e5bfea829e7e8766b9de9cc7fc0ac593f6667755f12ba2"} Mar 12 14:51:50 crc kubenswrapper[4832]: I0312 14:51:50.644745 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:51:50 crc kubenswrapper[4832]: I0312 14:51:50.669501 4832 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" podStartSLOduration=5.669484573 podStartE2EDuration="5.669484573s" podCreationTimestamp="2026-03-12 14:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:50.668109134 +0000 UTC m=+269.312123370" watchObservedRunningTime="2026-03-12 14:51:50.669484573 +0000 UTC m=+269.313498809" Mar 12 14:51:50 crc kubenswrapper[4832]: I0312 14:51:50.671949 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:51:50 crc kubenswrapper[4832]: I0312 14:51:50.681997 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" podStartSLOduration=5.681979469 podStartE2EDuration="5.681979469s" podCreationTimestamp="2026-03-12 14:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:50.679769766 +0000 UTC m=+269.323784002" watchObservedRunningTime="2026-03-12 14:51:50.681979469 +0000 UTC m=+269.325993695" Mar 12 14:51:51 crc kubenswrapper[4832]: I0312 14:51:51.026256 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" Mar 12 14:51:55 crc kubenswrapper[4832]: I0312 14:51:55.670027 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnxm6" event={"ID":"986f5b8c-a467-455c-9b4c-e53572535143","Type":"ContainerStarted","Data":"f3c91d99359cee44595de7b283093ca848096bb1cfc1fd7c8f390f26879ae234"} Mar 12 14:51:55 crc kubenswrapper[4832]: I0312 14:51:55.671907 4832 generic.go:334] "Generic (PLEG): container 
finished" podID="17368088-aec0-4319-8575-045b54487a1f" containerID="d91cdd0e71a9387d16a0a9d1df52b7aef12a2ca0e8c099dfec9a43dc30035e81" exitCode=0 Mar 12 14:51:55 crc kubenswrapper[4832]: I0312 14:51:55.671946 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xql8n" event={"ID":"17368088-aec0-4319-8575-045b54487a1f","Type":"ContainerDied","Data":"d91cdd0e71a9387d16a0a9d1df52b7aef12a2ca0e8c099dfec9a43dc30035e81"} Mar 12 14:51:56 crc kubenswrapper[4832]: I0312 14:51:56.314753 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:51:56 crc kubenswrapper[4832]: I0312 14:51:56.314824 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:51:56 crc kubenswrapper[4832]: I0312 14:51:56.314878 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 14:51:56 crc kubenswrapper[4832]: I0312 14:51:56.315485 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70"} pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 14:51:56 crc kubenswrapper[4832]: I0312 14:51:56.315589 4832 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" containerID="cri-o://f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70" gracePeriod=600 Mar 12 14:51:56 crc kubenswrapper[4832]: I0312 14:51:56.679181 4832 generic.go:334] "Generic (PLEG): container finished" podID="986f5b8c-a467-455c-9b4c-e53572535143" containerID="f3c91d99359cee44595de7b283093ca848096bb1cfc1fd7c8f390f26879ae234" exitCode=0 Mar 12 14:51:56 crc kubenswrapper[4832]: I0312 14:51:56.679228 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnxm6" event={"ID":"986f5b8c-a467-455c-9b4c-e53572535143","Type":"ContainerDied","Data":"f3c91d99359cee44595de7b283093ca848096bb1cfc1fd7c8f390f26879ae234"} Mar 12 14:51:57 crc kubenswrapper[4832]: I0312 14:51:57.685860 4832 generic.go:334] "Generic (PLEG): container finished" podID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerID="f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70" exitCode=0 Mar 12 14:51:57 crc kubenswrapper[4832]: I0312 14:51:57.686600 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerDied","Data":"f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70"} Mar 12 14:51:57 crc kubenswrapper[4832]: I0312 14:51:57.687422 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerStarted","Data":"db1ee485f07778922ad94a6aead05c59f51b934d1a75207bc586280604f97237"} Mar 12 14:51:57 crc kubenswrapper[4832]: I0312 14:51:57.689129 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwqfd" 
event={"ID":"584742b5-4cf7-4fcf-8b62-ad79df0bc737","Type":"ContainerStarted","Data":"328f1e96242f470743c58dbfcf9495770a5deb833c060f0ae1c3080f8736b0cf"} Mar 12 14:51:57 crc kubenswrapper[4832]: I0312 14:51:57.693118 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xql8n" event={"ID":"17368088-aec0-4319-8575-045b54487a1f","Type":"ContainerStarted","Data":"20b30a02905f2cf95243905236e40315a443ed4e35d7ca40bc5a17a153a52fb7"} Mar 12 14:51:57 crc kubenswrapper[4832]: I0312 14:51:57.746816 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xql8n" podStartSLOduration=2.3292637210000002 podStartE2EDuration="52.746796701s" podCreationTimestamp="2026-03-12 14:51:05 +0000 UTC" firstStartedPulling="2026-03-12 14:51:07.03333727 +0000 UTC m=+225.677351506" lastFinishedPulling="2026-03-12 14:51:57.45087026 +0000 UTC m=+276.094884486" observedRunningTime="2026-03-12 14:51:57.744685521 +0000 UTC m=+276.388699767" watchObservedRunningTime="2026-03-12 14:51:57.746796701 +0000 UTC m=+276.390810927" Mar 12 14:51:58 crc kubenswrapper[4832]: I0312 14:51:58.701449 4832 generic.go:334] "Generic (PLEG): container finished" podID="584742b5-4cf7-4fcf-8b62-ad79df0bc737" containerID="328f1e96242f470743c58dbfcf9495770a5deb833c060f0ae1c3080f8736b0cf" exitCode=0 Mar 12 14:51:58 crc kubenswrapper[4832]: I0312 14:51:58.701533 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwqfd" event={"ID":"584742b5-4cf7-4fcf-8b62-ad79df0bc737","Type":"ContainerDied","Data":"328f1e96242f470743c58dbfcf9495770a5deb833c060f0ae1c3080f8736b0cf"} Mar 12 14:51:58 crc kubenswrapper[4832]: I0312 14:51:58.704734 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnxm6" 
event={"ID":"986f5b8c-a467-455c-9b4c-e53572535143","Type":"ContainerStarted","Data":"9b810b3cf14274c3d6bc8412c6291f40bbc22e26a32c72ba85971c3bac819f88"} Mar 12 14:51:58 crc kubenswrapper[4832]: I0312 14:51:58.706577 4832 generic.go:334] "Generic (PLEG): container finished" podID="7fdc1c63-8a73-405f-aede-75834651cccc" containerID="cf6c92c22df95d17ce6af9f4645de9ad91fee9ebd421f46b7922fa48367d2edd" exitCode=0 Mar 12 14:51:58 crc kubenswrapper[4832]: I0312 14:51:58.706603 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qbwsc" event={"ID":"7fdc1c63-8a73-405f-aede-75834651cccc","Type":"ContainerDied","Data":"cf6c92c22df95d17ce6af9f4645de9ad91fee9ebd421f46b7922fa48367d2edd"} Mar 12 14:51:58 crc kubenswrapper[4832]: I0312 14:51:58.742564 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xnxm6" podStartSLOduration=3.326132119 podStartE2EDuration="51.742540774s" podCreationTimestamp="2026-03-12 14:51:07 +0000 UTC" firstStartedPulling="2026-03-12 14:51:09.324000724 +0000 UTC m=+227.968014950" lastFinishedPulling="2026-03-12 14:51:57.740409379 +0000 UTC m=+276.384423605" observedRunningTime="2026-03-12 14:51:58.738080727 +0000 UTC m=+277.382094953" watchObservedRunningTime="2026-03-12 14:51:58.742540774 +0000 UTC m=+277.386555020" Mar 12 14:51:58 crc kubenswrapper[4832]: I0312 14:51:58.789312 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7pf4z" Mar 12 14:51:58 crc kubenswrapper[4832]: I0312 14:51:58.828642 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7pf4z" Mar 12 14:51:59 crc kubenswrapper[4832]: I0312 14:51:59.728629 4832 generic.go:334] "Generic (PLEG): container finished" podID="bb45efb5-4239-4b47-9664-12fd61be0894" containerID="6ed08ab1c84c2c41a6808dc7797848628876570a82bbe6e67c16c7934ae90ee6" exitCode=0 Mar 12 
14:51:59 crc kubenswrapper[4832]: I0312 14:51:59.728999 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9n2m2" event={"ID":"bb45efb5-4239-4b47-9664-12fd61be0894","Type":"ContainerDied","Data":"6ed08ab1c84c2c41a6808dc7797848628876570a82bbe6e67c16c7934ae90ee6"} Mar 12 14:51:59 crc kubenswrapper[4832]: I0312 14:51:59.739582 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qbwsc" event={"ID":"7fdc1c63-8a73-405f-aede-75834651cccc","Type":"ContainerStarted","Data":"9ee9b29a44cc7b2ed6d165449d32ae75117ef1d1cb300d8ea497654e57b8b957"} Mar 12 14:51:59 crc kubenswrapper[4832]: I0312 14:51:59.741563 4832 generic.go:334] "Generic (PLEG): container finished" podID="28ad10d5-8a9a-418b-af56-da46474279fe" containerID="9b1baa8d0bdee10b45069075104faee876e9a4c16f9c7bca82e7796f488a2b55" exitCode=0 Mar 12 14:51:59 crc kubenswrapper[4832]: I0312 14:51:59.741631 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gc4cj" event={"ID":"28ad10d5-8a9a-418b-af56-da46474279fe","Type":"ContainerDied","Data":"9b1baa8d0bdee10b45069075104faee876e9a4c16f9c7bca82e7796f488a2b55"} Mar 12 14:51:59 crc kubenswrapper[4832]: I0312 14:51:59.766095 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qbwsc" podStartSLOduration=2.708537138 podStartE2EDuration="54.766054558s" podCreationTimestamp="2026-03-12 14:51:05 +0000 UTC" firstStartedPulling="2026-03-12 14:51:07.031123446 +0000 UTC m=+225.675137672" lastFinishedPulling="2026-03-12 14:51:59.088640866 +0000 UTC m=+277.732655092" observedRunningTime="2026-03-12 14:51:59.762425254 +0000 UTC m=+278.406439480" watchObservedRunningTime="2026-03-12 14:51:59.766054558 +0000 UTC m=+278.410068784" Mar 12 14:52:00 crc kubenswrapper[4832]: I0312 14:52:00.131072 4832 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29555452-gbgxl"] Mar 12 14:52:00 crc kubenswrapper[4832]: I0312 14:52:00.131867 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555452-gbgxl" Mar 12 14:52:00 crc kubenswrapper[4832]: I0312 14:52:00.133552 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:52:00 crc kubenswrapper[4832]: I0312 14:52:00.133595 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:52:00 crc kubenswrapper[4832]: I0312 14:52:00.133815 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 14:52:00 crc kubenswrapper[4832]: I0312 14:52:00.136906 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555452-gbgxl"] Mar 12 14:52:00 crc kubenswrapper[4832]: I0312 14:52:00.195378 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz56w\" (UniqueName: \"kubernetes.io/projected/eba41832-c6df-4b39-a585-a27d72b2d7fd-kube-api-access-jz56w\") pod \"auto-csr-approver-29555452-gbgxl\" (UID: \"eba41832-c6df-4b39-a585-a27d72b2d7fd\") " pod="openshift-infra/auto-csr-approver-29555452-gbgxl" Mar 12 14:52:00 crc kubenswrapper[4832]: I0312 14:52:00.298185 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz56w\" (UniqueName: \"kubernetes.io/projected/eba41832-c6df-4b39-a585-a27d72b2d7fd-kube-api-access-jz56w\") pod \"auto-csr-approver-29555452-gbgxl\" (UID: \"eba41832-c6df-4b39-a585-a27d72b2d7fd\") " pod="openshift-infra/auto-csr-approver-29555452-gbgxl" Mar 12 14:52:00 crc kubenswrapper[4832]: I0312 14:52:00.318628 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz56w\" (UniqueName: 
\"kubernetes.io/projected/eba41832-c6df-4b39-a585-a27d72b2d7fd-kube-api-access-jz56w\") pod \"auto-csr-approver-29555452-gbgxl\" (UID: \"eba41832-c6df-4b39-a585-a27d72b2d7fd\") " pod="openshift-infra/auto-csr-approver-29555452-gbgxl" Mar 12 14:52:00 crc kubenswrapper[4832]: I0312 14:52:00.474869 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555452-gbgxl" Mar 12 14:52:00 crc kubenswrapper[4832]: I0312 14:52:00.748873 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gc4cj" event={"ID":"28ad10d5-8a9a-418b-af56-da46474279fe","Type":"ContainerStarted","Data":"29c9df21c81333624a056e8ee5b6617c5ae7d2b1b150f829d606c2aedf58720e"} Mar 12 14:52:00 crc kubenswrapper[4832]: I0312 14:52:00.751694 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwqfd" event={"ID":"584742b5-4cf7-4fcf-8b62-ad79df0bc737","Type":"ContainerStarted","Data":"9924cb24e4aead9594b1538e4d5293977102a815273ed377c0a3dddf702eb957"} Mar 12 14:52:00 crc kubenswrapper[4832]: I0312 14:52:00.757740 4832 generic.go:334] "Generic (PLEG): container finished" podID="67ae4e40-af35-414c-8be7-4f9776319561" containerID="2e4bcdbac1ed9dbbe034fd46694dd1fbbd8bafc708c9b269a0c4b3a610f00cbd" exitCode=0 Mar 12 14:52:00 crc kubenswrapper[4832]: I0312 14:52:00.757786 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zjrmj" event={"ID":"67ae4e40-af35-414c-8be7-4f9776319561","Type":"ContainerDied","Data":"2e4bcdbac1ed9dbbe034fd46694dd1fbbd8bafc708c9b269a0c4b3a610f00cbd"} Mar 12 14:52:00 crc kubenswrapper[4832]: I0312 14:52:00.775012 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fwqfd" podStartSLOduration=3.909612986 podStartE2EDuration="52.774992826s" podCreationTimestamp="2026-03-12 14:51:08 +0000 UTC" firstStartedPulling="2026-03-12 
14:51:11.324500834 +0000 UTC m=+229.968515060" lastFinishedPulling="2026-03-12 14:52:00.189880684 +0000 UTC m=+278.833894900" observedRunningTime="2026-03-12 14:52:00.772645829 +0000 UTC m=+279.416660065" watchObservedRunningTime="2026-03-12 14:52:00.774992826 +0000 UTC m=+279.419007062" Mar 12 14:52:00 crc kubenswrapper[4832]: I0312 14:52:00.854968 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555452-gbgxl"] Mar 12 14:52:01 crc kubenswrapper[4832]: I0312 14:52:01.765267 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9n2m2" event={"ID":"bb45efb5-4239-4b47-9664-12fd61be0894","Type":"ContainerStarted","Data":"88b3b9f01f80dd454d93a316cd7ec969cc55677262c3ccfd56f2635ad18ef965"} Mar 12 14:52:01 crc kubenswrapper[4832]: I0312 14:52:01.767713 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555452-gbgxl" event={"ID":"eba41832-c6df-4b39-a585-a27d72b2d7fd","Type":"ContainerStarted","Data":"053022b81f883e7bfd2d892ea6eaec98692c5ae5f431cabc1c967b21c531a395"} Mar 12 14:52:01 crc kubenswrapper[4832]: I0312 14:52:01.787075 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9n2m2" podStartSLOduration=3.426058422 podStartE2EDuration="56.787061513s" podCreationTimestamp="2026-03-12 14:51:05 +0000 UTC" firstStartedPulling="2026-03-12 14:51:08.079482017 +0000 UTC m=+226.723496243" lastFinishedPulling="2026-03-12 14:52:01.440485108 +0000 UTC m=+280.084499334" observedRunningTime="2026-03-12 14:52:01.78590097 +0000 UTC m=+280.429915206" watchObservedRunningTime="2026-03-12 14:52:01.787061513 +0000 UTC m=+280.431075739" Mar 12 14:52:01 crc kubenswrapper[4832]: I0312 14:52:01.799607 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gc4cj" podStartSLOduration=3.546128702 
podStartE2EDuration="56.799564809s" podCreationTimestamp="2026-03-12 14:51:05 +0000 UTC" firstStartedPulling="2026-03-12 14:51:07.060667567 +0000 UTC m=+225.704681793" lastFinishedPulling="2026-03-12 14:52:00.314103684 +0000 UTC m=+278.958117900" observedRunningTime="2026-03-12 14:52:01.797377727 +0000 UTC m=+280.441391953" watchObservedRunningTime="2026-03-12 14:52:01.799564809 +0000 UTC m=+280.443579035" Mar 12 14:52:02 crc kubenswrapper[4832]: I0312 14:52:02.774494 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zjrmj" event={"ID":"67ae4e40-af35-414c-8be7-4f9776319561","Type":"ContainerStarted","Data":"317542610c41c9767c54457a45a6f71433b95e0823f7ed5eaf3c2b23fd4a5e3c"} Mar 12 14:52:04 crc kubenswrapper[4832]: I0312 14:52:04.788368 4832 generic.go:334] "Generic (PLEG): container finished" podID="eba41832-c6df-4b39-a585-a27d72b2d7fd" containerID="46394632e85d3cb84f266c578a9b706e26c0e47c333ccad5557a6b380f5c8c7c" exitCode=0 Mar 12 14:52:04 crc kubenswrapper[4832]: I0312 14:52:04.788562 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555452-gbgxl" event={"ID":"eba41832-c6df-4b39-a585-a27d72b2d7fd","Type":"ContainerDied","Data":"46394632e85d3cb84f266c578a9b706e26c0e47c333ccad5557a6b380f5c8c7c"} Mar 12 14:52:04 crc kubenswrapper[4832]: I0312 14:52:04.802390 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zjrmj" podStartSLOduration=5.344821027 podStartE2EDuration="57.80237452s" podCreationTimestamp="2026-03-12 14:51:07 +0000 UTC" firstStartedPulling="2026-03-12 14:51:09.171133567 +0000 UTC m=+227.815147793" lastFinishedPulling="2026-03-12 14:52:01.62868706 +0000 UTC m=+280.272701286" observedRunningTime="2026-03-12 14:52:02.798820642 +0000 UTC m=+281.442834878" watchObservedRunningTime="2026-03-12 14:52:04.80237452 +0000 UTC m=+283.446388746" Mar 12 14:52:05 crc kubenswrapper[4832]: I0312 
14:52:05.497941 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qbwsc" Mar 12 14:52:05 crc kubenswrapper[4832]: I0312 14:52:05.497981 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qbwsc" Mar 12 14:52:05 crc kubenswrapper[4832]: I0312 14:52:05.543707 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qbwsc" Mar 12 14:52:05 crc kubenswrapper[4832]: I0312 14:52:05.696966 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xql8n" Mar 12 14:52:05 crc kubenswrapper[4832]: I0312 14:52:05.697980 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xql8n" Mar 12 14:52:05 crc kubenswrapper[4832]: I0312 14:52:05.703320 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cd6949dbf-2sct6"] Mar 12 14:52:05 crc kubenswrapper[4832]: I0312 14:52:05.703736 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" podUID="a890b66d-31e8-4219-84e9-72f40af3df39" containerName="controller-manager" containerID="cri-o://d34d1a2ec62d53ec97e18ddb38d07c1474923a09906cd55853ad1db0a3c71922" gracePeriod=30 Mar 12 14:52:05 crc kubenswrapper[4832]: I0312 14:52:05.709181 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr"] Mar 12 14:52:05 crc kubenswrapper[4832]: I0312 14:52:05.709412 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" podUID="142c1439-9756-40be-8305-1537d7a59ef4" containerName="route-controller-manager" 
containerID="cri-o://f4d46072fffd987aa8c70db380a8eb62417e58da6edea894c2084d99ebc21cc0" gracePeriod=30 Mar 12 14:52:05 crc kubenswrapper[4832]: I0312 14:52:05.749150 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xql8n" Mar 12 14:52:05 crc kubenswrapper[4832]: I0312 14:52:05.841685 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xql8n" Mar 12 14:52:05 crc kubenswrapper[4832]: I0312 14:52:05.859633 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qbwsc" Mar 12 14:52:05 crc kubenswrapper[4832]: I0312 14:52:05.897208 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gc4cj" Mar 12 14:52:05 crc kubenswrapper[4832]: I0312 14:52:05.897263 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gc4cj" Mar 12 14:52:05 crc kubenswrapper[4832]: I0312 14:52:05.951380 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gc4cj" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.185810 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9n2m2" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.185846 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9n2m2" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.221022 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9n2m2" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.322532 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555452-gbgxl" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.375501 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz56w\" (UniqueName: \"kubernetes.io/projected/eba41832-c6df-4b39-a585-a27d72b2d7fd-kube-api-access-jz56w\") pod \"eba41832-c6df-4b39-a585-a27d72b2d7fd\" (UID: \"eba41832-c6df-4b39-a585-a27d72b2d7fd\") " Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.386748 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eba41832-c6df-4b39-a585-a27d72b2d7fd-kube-api-access-jz56w" (OuterVolumeSpecName: "kube-api-access-jz56w") pod "eba41832-c6df-4b39-a585-a27d72b2d7fd" (UID: "eba41832-c6df-4b39-a585-a27d72b2d7fd"). InnerVolumeSpecName "kube-api-access-jz56w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.477661 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz56w\" (UniqueName: \"kubernetes.io/projected/eba41832-c6df-4b39-a585-a27d72b2d7fd-kube-api-access-jz56w\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.756466 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.781361 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xzdq\" (UniqueName: \"kubernetes.io/projected/142c1439-9756-40be-8305-1537d7a59ef4-kube-api-access-7xzdq\") pod \"142c1439-9756-40be-8305-1537d7a59ef4\" (UID: \"142c1439-9756-40be-8305-1537d7a59ef4\") " Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.781450 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/142c1439-9756-40be-8305-1537d7a59ef4-client-ca\") pod \"142c1439-9756-40be-8305-1537d7a59ef4\" (UID: \"142c1439-9756-40be-8305-1537d7a59ef4\") " Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.781561 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/142c1439-9756-40be-8305-1537d7a59ef4-serving-cert\") pod \"142c1439-9756-40be-8305-1537d7a59ef4\" (UID: \"142c1439-9756-40be-8305-1537d7a59ef4\") " Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.781597 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/142c1439-9756-40be-8305-1537d7a59ef4-config\") pod \"142c1439-9756-40be-8305-1537d7a59ef4\" (UID: \"142c1439-9756-40be-8305-1537d7a59ef4\") " Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.782242 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/142c1439-9756-40be-8305-1537d7a59ef4-client-ca" (OuterVolumeSpecName: "client-ca") pod "142c1439-9756-40be-8305-1537d7a59ef4" (UID: "142c1439-9756-40be-8305-1537d7a59ef4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.782350 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/142c1439-9756-40be-8305-1537d7a59ef4-config" (OuterVolumeSpecName: "config") pod "142c1439-9756-40be-8305-1537d7a59ef4" (UID: "142c1439-9756-40be-8305-1537d7a59ef4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.785388 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/142c1439-9756-40be-8305-1537d7a59ef4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "142c1439-9756-40be-8305-1537d7a59ef4" (UID: "142c1439-9756-40be-8305-1537d7a59ef4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.785602 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/142c1439-9756-40be-8305-1537d7a59ef4-kube-api-access-7xzdq" (OuterVolumeSpecName: "kube-api-access-7xzdq") pod "142c1439-9756-40be-8305-1537d7a59ef4" (UID: "142c1439-9756-40be-8305-1537d7a59ef4"). InnerVolumeSpecName "kube-api-access-7xzdq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.799514 4832 generic.go:334] "Generic (PLEG): container finished" podID="142c1439-9756-40be-8305-1537d7a59ef4" containerID="f4d46072fffd987aa8c70db380a8eb62417e58da6edea894c2084d99ebc21cc0" exitCode=0 Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.799573 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" event={"ID":"142c1439-9756-40be-8305-1537d7a59ef4","Type":"ContainerDied","Data":"f4d46072fffd987aa8c70db380a8eb62417e58da6edea894c2084d99ebc21cc0"} Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.799599 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" event={"ID":"142c1439-9756-40be-8305-1537d7a59ef4","Type":"ContainerDied","Data":"972603947b3e262291c88c93735e309ac66f9927c53d68338ed6d91406d31309"} Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.799616 4832 scope.go:117] "RemoveContainer" containerID="f4d46072fffd987aa8c70db380a8eb62417e58da6edea894c2084d99ebc21cc0" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.799725 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.804484 4832 generic.go:334] "Generic (PLEG): container finished" podID="a890b66d-31e8-4219-84e9-72f40af3df39" containerID="d34d1a2ec62d53ec97e18ddb38d07c1474923a09906cd55853ad1db0a3c71922" exitCode=0 Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.804613 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" event={"ID":"a890b66d-31e8-4219-84e9-72f40af3df39","Type":"ContainerDied","Data":"d34d1a2ec62d53ec97e18ddb38d07c1474923a09906cd55853ad1db0a3c71922"} Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.806538 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555452-gbgxl" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.806568 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555452-gbgxl" event={"ID":"eba41832-c6df-4b39-a585-a27d72b2d7fd","Type":"ContainerDied","Data":"053022b81f883e7bfd2d892ea6eaec98692c5ae5f431cabc1c967b21c531a395"} Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.806585 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="053022b81f883e7bfd2d892ea6eaec98692c5ae5f431cabc1c967b21c531a395" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.848010 4832 scope.go:117] "RemoveContainer" containerID="f4d46072fffd987aa8c70db380a8eb62417e58da6edea894c2084d99ebc21cc0" Mar 12 14:52:06 crc kubenswrapper[4832]: E0312 14:52:06.848562 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4d46072fffd987aa8c70db380a8eb62417e58da6edea894c2084d99ebc21cc0\": container with ID starting with f4d46072fffd987aa8c70db380a8eb62417e58da6edea894c2084d99ebc21cc0 not found: ID does not 
exist" containerID="f4d46072fffd987aa8c70db380a8eb62417e58da6edea894c2084d99ebc21cc0" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.848791 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4d46072fffd987aa8c70db380a8eb62417e58da6edea894c2084d99ebc21cc0"} err="failed to get container status \"f4d46072fffd987aa8c70db380a8eb62417e58da6edea894c2084d99ebc21cc0\": rpc error: code = NotFound desc = could not find container \"f4d46072fffd987aa8c70db380a8eb62417e58da6edea894c2084d99ebc21cc0\": container with ID starting with f4d46072fffd987aa8c70db380a8eb62417e58da6edea894c2084d99ebc21cc0 not found: ID does not exist" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.871696 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9n2m2" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.873794 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr"] Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.876890 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gc4cj" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.878142 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcf5fbfb7-72hcr"] Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.883159 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/142c1439-9756-40be-8305-1537d7a59ef4-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.883190 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/142c1439-9756-40be-8305-1537d7a59ef4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:06 
crc kubenswrapper[4832]: I0312 14:52:06.883202 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/142c1439-9756-40be-8305-1537d7a59ef4-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.883214 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xzdq\" (UniqueName: \"kubernetes.io/projected/142c1439-9756-40be-8305-1537d7a59ef4-kube-api-access-7xzdq\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.927525 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.984061 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9cfl\" (UniqueName: \"kubernetes.io/projected/a890b66d-31e8-4219-84e9-72f40af3df39-kube-api-access-k9cfl\") pod \"a890b66d-31e8-4219-84e9-72f40af3df39\" (UID: \"a890b66d-31e8-4219-84e9-72f40af3df39\") " Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.984172 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a890b66d-31e8-4219-84e9-72f40af3df39-config\") pod \"a890b66d-31e8-4219-84e9-72f40af3df39\" (UID: \"a890b66d-31e8-4219-84e9-72f40af3df39\") " Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.984223 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a890b66d-31e8-4219-84e9-72f40af3df39-client-ca\") pod \"a890b66d-31e8-4219-84e9-72f40af3df39\" (UID: \"a890b66d-31e8-4219-84e9-72f40af3df39\") " Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.984249 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a890b66d-31e8-4219-84e9-72f40af3df39-serving-cert\") pod \"a890b66d-31e8-4219-84e9-72f40af3df39\" (UID: \"a890b66d-31e8-4219-84e9-72f40af3df39\") " Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.984312 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a890b66d-31e8-4219-84e9-72f40af3df39-proxy-ca-bundles\") pod \"a890b66d-31e8-4219-84e9-72f40af3df39\" (UID: \"a890b66d-31e8-4219-84e9-72f40af3df39\") " Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.985191 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a890b66d-31e8-4219-84e9-72f40af3df39-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a890b66d-31e8-4219-84e9-72f40af3df39" (UID: "a890b66d-31e8-4219-84e9-72f40af3df39"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.987805 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a890b66d-31e8-4219-84e9-72f40af3df39-kube-api-access-k9cfl" (OuterVolumeSpecName: "kube-api-access-k9cfl") pod "a890b66d-31e8-4219-84e9-72f40af3df39" (UID: "a890b66d-31e8-4219-84e9-72f40af3df39"). InnerVolumeSpecName "kube-api-access-k9cfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.988210 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a890b66d-31e8-4219-84e9-72f40af3df39-config" (OuterVolumeSpecName: "config") pod "a890b66d-31e8-4219-84e9-72f40af3df39" (UID: "a890b66d-31e8-4219-84e9-72f40af3df39"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.988627 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a890b66d-31e8-4219-84e9-72f40af3df39-client-ca" (OuterVolumeSpecName: "client-ca") pod "a890b66d-31e8-4219-84e9-72f40af3df39" (UID: "a890b66d-31e8-4219-84e9-72f40af3df39"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:52:06 crc kubenswrapper[4832]: I0312 14:52:06.990731 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a890b66d-31e8-4219-84e9-72f40af3df39-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a890b66d-31e8-4219-84e9-72f40af3df39" (UID: "a890b66d-31e8-4219-84e9-72f40af3df39"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.085859 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a890b66d-31e8-4219-84e9-72f40af3df39-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.086077 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a890b66d-31e8-4219-84e9-72f40af3df39-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.086086 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a890b66d-31e8-4219-84e9-72f40af3df39-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.086094 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a890b66d-31e8-4219-84e9-72f40af3df39-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:07 crc kubenswrapper[4832]: 
I0312 14:52:07.086103 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9cfl\" (UniqueName: \"kubernetes.io/projected/a890b66d-31e8-4219-84e9-72f40af3df39-kube-api-access-k9cfl\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.707895 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz"] Mar 12 14:52:07 crc kubenswrapper[4832]: E0312 14:52:07.708193 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a890b66d-31e8-4219-84e9-72f40af3df39" containerName="controller-manager" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.708214 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a890b66d-31e8-4219-84e9-72f40af3df39" containerName="controller-manager" Mar 12 14:52:07 crc kubenswrapper[4832]: E0312 14:52:07.708233 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eba41832-c6df-4b39-a585-a27d72b2d7fd" containerName="oc" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.708241 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="eba41832-c6df-4b39-a585-a27d72b2d7fd" containerName="oc" Mar 12 14:52:07 crc kubenswrapper[4832]: E0312 14:52:07.708251 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="142c1439-9756-40be-8305-1537d7a59ef4" containerName="route-controller-manager" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.708258 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="142c1439-9756-40be-8305-1537d7a59ef4" containerName="route-controller-manager" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.708391 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="142c1439-9756-40be-8305-1537d7a59ef4" containerName="route-controller-manager" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.708411 4832 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a890b66d-31e8-4219-84e9-72f40af3df39" containerName="controller-manager" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.708422 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="eba41832-c6df-4b39-a585-a27d72b2d7fd" containerName="oc" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.708823 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.709875 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xnxm6" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.710350 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xnxm6" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.712089 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw"] Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.712832 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.715547 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.715554 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.715630 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.715663 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.716427 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.716632 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.727308 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz"] Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.731990 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw"] Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.772810 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xnxm6" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.794175 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5745q\" (UniqueName: \"kubernetes.io/projected/4e67d890-3131-454a-bd9c-89b8212c8842-kube-api-access-5745q\") pod \"controller-manager-7cdb5dc6-gjtgw\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.794427 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c275e157-9561-4355-a81b-a14441a2940f-serving-cert\") pod \"route-controller-manager-7dbf96fbfc-frrqz\" (UID: \"c275e157-9561-4355-a81b-a14441a2940f\") " pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.794567 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e67d890-3131-454a-bd9c-89b8212c8842-client-ca\") pod \"controller-manager-7cdb5dc6-gjtgw\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.794802 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c275e157-9561-4355-a81b-a14441a2940f-client-ca\") pod \"route-controller-manager-7dbf96fbfc-frrqz\" (UID: \"c275e157-9561-4355-a81b-a14441a2940f\") " pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.794882 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e67d890-3131-454a-bd9c-89b8212c8842-proxy-ca-bundles\") pod \"controller-manager-7cdb5dc6-gjtgw\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " 
pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.794952 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c275e157-9561-4355-a81b-a14441a2940f-config\") pod \"route-controller-manager-7dbf96fbfc-frrqz\" (UID: \"c275e157-9561-4355-a81b-a14441a2940f\") " pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.795024 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e67d890-3131-454a-bd9c-89b8212c8842-config\") pod \"controller-manager-7cdb5dc6-gjtgw\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.795067 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e67d890-3131-454a-bd9c-89b8212c8842-serving-cert\") pod \"controller-manager-7cdb5dc6-gjtgw\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.795173 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p27w4\" (UniqueName: \"kubernetes.io/projected/c275e157-9561-4355-a81b-a14441a2940f-kube-api-access-p27w4\") pod \"route-controller-manager-7dbf96fbfc-frrqz\" (UID: \"c275e157-9561-4355-a81b-a14441a2940f\") " pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.813693 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" event={"ID":"a890b66d-31e8-4219-84e9-72f40af3df39","Type":"ContainerDied","Data":"82b2643b6ec16b53f4e5bfea829e7e8766b9de9cc7fc0ac593f6667755f12ba2"} Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.813739 4832 scope.go:117] "RemoveContainer" containerID="d34d1a2ec62d53ec97e18ddb38d07c1474923a09906cd55853ad1db0a3c71922" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.813933 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd6949dbf-2sct6" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.842384 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cd6949dbf-2sct6"] Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.845191 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6cd6949dbf-2sct6"] Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.858907 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xnxm6" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.863089 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gc4cj"] Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.895819 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c275e157-9561-4355-a81b-a14441a2940f-serving-cert\") pod \"route-controller-manager-7dbf96fbfc-frrqz\" (UID: \"c275e157-9561-4355-a81b-a14441a2940f\") " pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.895871 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/4e67d890-3131-454a-bd9c-89b8212c8842-client-ca\") pod \"controller-manager-7cdb5dc6-gjtgw\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.895949 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c275e157-9561-4355-a81b-a14441a2940f-client-ca\") pod \"route-controller-manager-7dbf96fbfc-frrqz\" (UID: \"c275e157-9561-4355-a81b-a14441a2940f\") " pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.895975 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e67d890-3131-454a-bd9c-89b8212c8842-proxy-ca-bundles\") pod \"controller-manager-7cdb5dc6-gjtgw\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.895990 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c275e157-9561-4355-a81b-a14441a2940f-config\") pod \"route-controller-manager-7dbf96fbfc-frrqz\" (UID: \"c275e157-9561-4355-a81b-a14441a2940f\") " pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.896018 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e67d890-3131-454a-bd9c-89b8212c8842-config\") pod \"controller-manager-7cdb5dc6-gjtgw\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.896036 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e67d890-3131-454a-bd9c-89b8212c8842-serving-cert\") pod \"controller-manager-7cdb5dc6-gjtgw\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.896070 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p27w4\" (UniqueName: \"kubernetes.io/projected/c275e157-9561-4355-a81b-a14441a2940f-kube-api-access-p27w4\") pod \"route-controller-manager-7dbf96fbfc-frrqz\" (UID: \"c275e157-9561-4355-a81b-a14441a2940f\") " pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.896087 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5745q\" (UniqueName: \"kubernetes.io/projected/4e67d890-3131-454a-bd9c-89b8212c8842-kube-api-access-5745q\") pod \"controller-manager-7cdb5dc6-gjtgw\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.897135 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e67d890-3131-454a-bd9c-89b8212c8842-client-ca\") pod \"controller-manager-7cdb5dc6-gjtgw\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.897530 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c275e157-9561-4355-a81b-a14441a2940f-client-ca\") pod \"route-controller-manager-7dbf96fbfc-frrqz\" (UID: \"c275e157-9561-4355-a81b-a14441a2940f\") " 
pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.898649 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e67d890-3131-454a-bd9c-89b8212c8842-config\") pod \"controller-manager-7cdb5dc6-gjtgw\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.898648 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c275e157-9561-4355-a81b-a14441a2940f-config\") pod \"route-controller-manager-7dbf96fbfc-frrqz\" (UID: \"c275e157-9561-4355-a81b-a14441a2940f\") " pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.898738 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e67d890-3131-454a-bd9c-89b8212c8842-proxy-ca-bundles\") pod \"controller-manager-7cdb5dc6-gjtgw\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.903677 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c275e157-9561-4355-a81b-a14441a2940f-serving-cert\") pod \"route-controller-manager-7dbf96fbfc-frrqz\" (UID: \"c275e157-9561-4355-a81b-a14441a2940f\") " pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.904215 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e67d890-3131-454a-bd9c-89b8212c8842-serving-cert\") pod 
\"controller-manager-7cdb5dc6-gjtgw\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.914378 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p27w4\" (UniqueName: \"kubernetes.io/projected/c275e157-9561-4355-a81b-a14441a2940f-kube-api-access-p27w4\") pod \"route-controller-manager-7dbf96fbfc-frrqz\" (UID: \"c275e157-9561-4355-a81b-a14441a2940f\") " pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" Mar 12 14:52:07 crc kubenswrapper[4832]: I0312 14:52:07.915440 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5745q\" (UniqueName: \"kubernetes.io/projected/4e67d890-3131-454a-bd9c-89b8212c8842-kube-api-access-5745q\") pod \"controller-manager-7cdb5dc6-gjtgw\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.029899 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.047710 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.070230 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9n2m2"] Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.104070 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zjrmj" Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.104143 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zjrmj" Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.149681 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zjrmj" Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.451702 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz"] Mar 12 14:52:08 crc kubenswrapper[4832]: W0312 14:52:08.531217 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e67d890_3131_454a_bd9c_89b8212c8842.slice/crio-bee30a2fd3f5d8772941403cad733c7295a0114ca90c0acd4a3793de3a70c1ed WatchSource:0}: Error finding container bee30a2fd3f5d8772941403cad733c7295a0114ca90c0acd4a3793de3a70c1ed: Status 404 returned error can't find the container with id bee30a2fd3f5d8772941403cad733c7295a0114ca90c0acd4a3793de3a70c1ed Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.533150 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw"] Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.628413 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="142c1439-9756-40be-8305-1537d7a59ef4" 
path="/var/lib/kubelet/pods/142c1439-9756-40be-8305-1537d7a59ef4/volumes" Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.629414 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a890b66d-31e8-4219-84e9-72f40af3df39" path="/var/lib/kubelet/pods/a890b66d-31e8-4219-84e9-72f40af3df39/volumes" Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.820032 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" event={"ID":"4e67d890-3131-454a-bd9c-89b8212c8842","Type":"ContainerStarted","Data":"de9c4f90ddbe052cfd4cd05767294661898acd498a8fe454fe1ab8bb9953e880"} Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.820079 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" event={"ID":"4e67d890-3131-454a-bd9c-89b8212c8842","Type":"ContainerStarted","Data":"bee30a2fd3f5d8772941403cad733c7295a0114ca90c0acd4a3793de3a70c1ed"} Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.820788 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.822175 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" event={"ID":"c275e157-9561-4355-a81b-a14441a2940f","Type":"ContainerStarted","Data":"10328ed23acbd83a0c646616f06bb6d6c9c4d23c85f9d3a04231964c145c48c0"} Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.822235 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" event={"ID":"c275e157-9561-4355-a81b-a14441a2940f","Type":"ContainerStarted","Data":"b2b224f980f9f5dde2650f6247557fac15d141e22f99ced1de999a98f4aa8907"} Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.822380 4832 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.824127 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9n2m2" podUID="bb45efb5-4239-4b47-9664-12fd61be0894" containerName="registry-server" containerID="cri-o://88b3b9f01f80dd454d93a316cd7ec969cc55677262c3ccfd56f2635ad18ef965" gracePeriod=2 Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.824155 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gc4cj" podUID="28ad10d5-8a9a-418b-af56-da46474279fe" containerName="registry-server" containerID="cri-o://29c9df21c81333624a056e8ee5b6617c5ae7d2b1b150f829d606c2aedf58720e" gracePeriod=2 Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.827171 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.840385 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" podStartSLOduration=3.840363657 podStartE2EDuration="3.840363657s" podCreationTimestamp="2026-03-12 14:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:52:08.837453364 +0000 UTC m=+287.481467590" watchObservedRunningTime="2026-03-12 14:52:08.840363657 +0000 UTC m=+287.484377883" Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.853546 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" podStartSLOduration=3.853529133 podStartE2EDuration="3.853529133s" podCreationTimestamp="2026-03-12 14:52:05 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:52:08.851236527 +0000 UTC m=+287.495250773" watchObservedRunningTime="2026-03-12 14:52:08.853529133 +0000 UTC m=+287.497543359" Mar 12 14:52:08 crc kubenswrapper[4832]: I0312 14:52:08.868447 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zjrmj" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.055939 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.143003 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fwqfd" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.143044 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fwqfd" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.206902 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fwqfd" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.301043 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9n2m2" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.308185 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gc4cj" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.312837 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb45efb5-4239-4b47-9664-12fd61be0894-utilities\") pod \"bb45efb5-4239-4b47-9664-12fd61be0894\" (UID: \"bb45efb5-4239-4b47-9664-12fd61be0894\") " Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.312880 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ad10d5-8a9a-418b-af56-da46474279fe-utilities\") pod \"28ad10d5-8a9a-418b-af56-da46474279fe\" (UID: \"28ad10d5-8a9a-418b-af56-da46474279fe\") " Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.312901 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ad10d5-8a9a-418b-af56-da46474279fe-catalog-content\") pod \"28ad10d5-8a9a-418b-af56-da46474279fe\" (UID: \"28ad10d5-8a9a-418b-af56-da46474279fe\") " Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.312943 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb45efb5-4239-4b47-9664-12fd61be0894-catalog-content\") pod \"bb45efb5-4239-4b47-9664-12fd61be0894\" (UID: \"bb45efb5-4239-4b47-9664-12fd61be0894\") " Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.312980 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxrbm\" (UniqueName: \"kubernetes.io/projected/bb45efb5-4239-4b47-9664-12fd61be0894-kube-api-access-nxrbm\") pod \"bb45efb5-4239-4b47-9664-12fd61be0894\" (UID: \"bb45efb5-4239-4b47-9664-12fd61be0894\") " Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.313004 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-727pv\" (UniqueName: \"kubernetes.io/projected/28ad10d5-8a9a-418b-af56-da46474279fe-kube-api-access-727pv\") pod \"28ad10d5-8a9a-418b-af56-da46474279fe\" (UID: \"28ad10d5-8a9a-418b-af56-da46474279fe\") " Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.313580 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb45efb5-4239-4b47-9664-12fd61be0894-utilities" (OuterVolumeSpecName: "utilities") pod "bb45efb5-4239-4b47-9664-12fd61be0894" (UID: "bb45efb5-4239-4b47-9664-12fd61be0894"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.313759 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28ad10d5-8a9a-418b-af56-da46474279fe-utilities" (OuterVolumeSpecName: "utilities") pod "28ad10d5-8a9a-418b-af56-da46474279fe" (UID: "28ad10d5-8a9a-418b-af56-da46474279fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.332191 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ad10d5-8a9a-418b-af56-da46474279fe-kube-api-access-727pv" (OuterVolumeSpecName: "kube-api-access-727pv") pod "28ad10d5-8a9a-418b-af56-da46474279fe" (UID: "28ad10d5-8a9a-418b-af56-da46474279fe"). InnerVolumeSpecName "kube-api-access-727pv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.332248 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb45efb5-4239-4b47-9664-12fd61be0894-kube-api-access-nxrbm" (OuterVolumeSpecName: "kube-api-access-nxrbm") pod "bb45efb5-4239-4b47-9664-12fd61be0894" (UID: "bb45efb5-4239-4b47-9664-12fd61be0894"). InnerVolumeSpecName "kube-api-access-nxrbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.373527 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb45efb5-4239-4b47-9664-12fd61be0894-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb45efb5-4239-4b47-9664-12fd61be0894" (UID: "bb45efb5-4239-4b47-9664-12fd61be0894"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.377588 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28ad10d5-8a9a-418b-af56-da46474279fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28ad10d5-8a9a-418b-af56-da46474279fe" (UID: "28ad10d5-8a9a-418b-af56-da46474279fe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.414491 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb45efb5-4239-4b47-9664-12fd61be0894-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.414540 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ad10d5-8a9a-418b-af56-da46474279fe-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.414550 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ad10d5-8a9a-418b-af56-da46474279fe-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.414560 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb45efb5-4239-4b47-9664-12fd61be0894-catalog-content\") on node \"crc\" DevicePath 
\"\"" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.414569 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxrbm\" (UniqueName: \"kubernetes.io/projected/bb45efb5-4239-4b47-9664-12fd61be0894-kube-api-access-nxrbm\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.414577 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-727pv\" (UniqueName: \"kubernetes.io/projected/28ad10d5-8a9a-418b-af56-da46474279fe-kube-api-access-727pv\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.830571 4832 generic.go:334] "Generic (PLEG): container finished" podID="bb45efb5-4239-4b47-9664-12fd61be0894" containerID="88b3b9f01f80dd454d93a316cd7ec969cc55677262c3ccfd56f2635ad18ef965" exitCode=0 Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.830700 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9n2m2" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.830844 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9n2m2" event={"ID":"bb45efb5-4239-4b47-9664-12fd61be0894","Type":"ContainerDied","Data":"88b3b9f01f80dd454d93a316cd7ec969cc55677262c3ccfd56f2635ad18ef965"} Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.830902 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9n2m2" event={"ID":"bb45efb5-4239-4b47-9664-12fd61be0894","Type":"ContainerDied","Data":"6fe0d2269637224e70e070ce9dcc62c0f564837710769f6268fcab2341c82dee"} Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.830931 4832 scope.go:117] "RemoveContainer" containerID="88b3b9f01f80dd454d93a316cd7ec969cc55677262c3ccfd56f2635ad18ef965" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.833714 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="28ad10d5-8a9a-418b-af56-da46474279fe" containerID="29c9df21c81333624a056e8ee5b6617c5ae7d2b1b150f829d606c2aedf58720e" exitCode=0 Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.834923 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gc4cj" event={"ID":"28ad10d5-8a9a-418b-af56-da46474279fe","Type":"ContainerDied","Data":"29c9df21c81333624a056e8ee5b6617c5ae7d2b1b150f829d606c2aedf58720e"} Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.834969 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gc4cj" event={"ID":"28ad10d5-8a9a-418b-af56-da46474279fe","Type":"ContainerDied","Data":"dfa88c9782bd2724616b0d2c812156bae3f648bae7348eb99abe70548fb352bf"} Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.835446 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gc4cj" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.848886 4832 scope.go:117] "RemoveContainer" containerID="6ed08ab1c84c2c41a6808dc7797848628876570a82bbe6e67c16c7934ae90ee6" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.863940 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9n2m2"] Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.872476 4832 scope.go:117] "RemoveContainer" containerID="7e757a9951e383aba498f4b4ea5a53ab01951f6a68f9254f65eaa8b45286f572" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.875649 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9n2m2"] Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.879109 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gc4cj"] Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.882255 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-gc4cj"] Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.888940 4832 scope.go:117] "RemoveContainer" containerID="88b3b9f01f80dd454d93a316cd7ec969cc55677262c3ccfd56f2635ad18ef965" Mar 12 14:52:09 crc kubenswrapper[4832]: E0312 14:52:09.889401 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88b3b9f01f80dd454d93a316cd7ec969cc55677262c3ccfd56f2635ad18ef965\": container with ID starting with 88b3b9f01f80dd454d93a316cd7ec969cc55677262c3ccfd56f2635ad18ef965 not found: ID does not exist" containerID="88b3b9f01f80dd454d93a316cd7ec969cc55677262c3ccfd56f2635ad18ef965" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.889446 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b3b9f01f80dd454d93a316cd7ec969cc55677262c3ccfd56f2635ad18ef965"} err="failed to get container status \"88b3b9f01f80dd454d93a316cd7ec969cc55677262c3ccfd56f2635ad18ef965\": rpc error: code = NotFound desc = could not find container \"88b3b9f01f80dd454d93a316cd7ec969cc55677262c3ccfd56f2635ad18ef965\": container with ID starting with 88b3b9f01f80dd454d93a316cd7ec969cc55677262c3ccfd56f2635ad18ef965 not found: ID does not exist" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.889474 4832 scope.go:117] "RemoveContainer" containerID="6ed08ab1c84c2c41a6808dc7797848628876570a82bbe6e67c16c7934ae90ee6" Mar 12 14:52:09 crc kubenswrapper[4832]: E0312 14:52:09.889980 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ed08ab1c84c2c41a6808dc7797848628876570a82bbe6e67c16c7934ae90ee6\": container with ID starting with 6ed08ab1c84c2c41a6808dc7797848628876570a82bbe6e67c16c7934ae90ee6 not found: ID does not exist" containerID="6ed08ab1c84c2c41a6808dc7797848628876570a82bbe6e67c16c7934ae90ee6" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 
14:52:09.890060 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ed08ab1c84c2c41a6808dc7797848628876570a82bbe6e67c16c7934ae90ee6"} err="failed to get container status \"6ed08ab1c84c2c41a6808dc7797848628876570a82bbe6e67c16c7934ae90ee6\": rpc error: code = NotFound desc = could not find container \"6ed08ab1c84c2c41a6808dc7797848628876570a82bbe6e67c16c7934ae90ee6\": container with ID starting with 6ed08ab1c84c2c41a6808dc7797848628876570a82bbe6e67c16c7934ae90ee6 not found: ID does not exist" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.890087 4832 scope.go:117] "RemoveContainer" containerID="7e757a9951e383aba498f4b4ea5a53ab01951f6a68f9254f65eaa8b45286f572" Mar 12 14:52:09 crc kubenswrapper[4832]: E0312 14:52:09.890600 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e757a9951e383aba498f4b4ea5a53ab01951f6a68f9254f65eaa8b45286f572\": container with ID starting with 7e757a9951e383aba498f4b4ea5a53ab01951f6a68f9254f65eaa8b45286f572 not found: ID does not exist" containerID="7e757a9951e383aba498f4b4ea5a53ab01951f6a68f9254f65eaa8b45286f572" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.890631 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e757a9951e383aba498f4b4ea5a53ab01951f6a68f9254f65eaa8b45286f572"} err="failed to get container status \"7e757a9951e383aba498f4b4ea5a53ab01951f6a68f9254f65eaa8b45286f572\": rpc error: code = NotFound desc = could not find container \"7e757a9951e383aba498f4b4ea5a53ab01951f6a68f9254f65eaa8b45286f572\": container with ID starting with 7e757a9951e383aba498f4b4ea5a53ab01951f6a68f9254f65eaa8b45286f572 not found: ID does not exist" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.890650 4832 scope.go:117] "RemoveContainer" containerID="29c9df21c81333624a056e8ee5b6617c5ae7d2b1b150f829d606c2aedf58720e" Mar 12 14:52:09 crc 
kubenswrapper[4832]: I0312 14:52:09.893459 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fwqfd" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.909181 4832 scope.go:117] "RemoveContainer" containerID="9b1baa8d0bdee10b45069075104faee876e9a4c16f9c7bca82e7796f488a2b55" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.953269 4832 scope.go:117] "RemoveContainer" containerID="5cea94da7ef39901353e4b62667213272b28ad255141df01b6b81b061023a904" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.978794 4832 scope.go:117] "RemoveContainer" containerID="29c9df21c81333624a056e8ee5b6617c5ae7d2b1b150f829d606c2aedf58720e" Mar 12 14:52:09 crc kubenswrapper[4832]: E0312 14:52:09.979319 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29c9df21c81333624a056e8ee5b6617c5ae7d2b1b150f829d606c2aedf58720e\": container with ID starting with 29c9df21c81333624a056e8ee5b6617c5ae7d2b1b150f829d606c2aedf58720e not found: ID does not exist" containerID="29c9df21c81333624a056e8ee5b6617c5ae7d2b1b150f829d606c2aedf58720e" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.979361 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c9df21c81333624a056e8ee5b6617c5ae7d2b1b150f829d606c2aedf58720e"} err="failed to get container status \"29c9df21c81333624a056e8ee5b6617c5ae7d2b1b150f829d606c2aedf58720e\": rpc error: code = NotFound desc = could not find container \"29c9df21c81333624a056e8ee5b6617c5ae7d2b1b150f829d606c2aedf58720e\": container with ID starting with 29c9df21c81333624a056e8ee5b6617c5ae7d2b1b150f829d606c2aedf58720e not found: ID does not exist" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.979389 4832 scope.go:117] "RemoveContainer" containerID="9b1baa8d0bdee10b45069075104faee876e9a4c16f9c7bca82e7796f488a2b55" Mar 12 14:52:09 crc kubenswrapper[4832]: E0312 
14:52:09.979745 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b1baa8d0bdee10b45069075104faee876e9a4c16f9c7bca82e7796f488a2b55\": container with ID starting with 9b1baa8d0bdee10b45069075104faee876e9a4c16f9c7bca82e7796f488a2b55 not found: ID does not exist" containerID="9b1baa8d0bdee10b45069075104faee876e9a4c16f9c7bca82e7796f488a2b55" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.979794 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b1baa8d0bdee10b45069075104faee876e9a4c16f9c7bca82e7796f488a2b55"} err="failed to get container status \"9b1baa8d0bdee10b45069075104faee876e9a4c16f9c7bca82e7796f488a2b55\": rpc error: code = NotFound desc = could not find container \"9b1baa8d0bdee10b45069075104faee876e9a4c16f9c7bca82e7796f488a2b55\": container with ID starting with 9b1baa8d0bdee10b45069075104faee876e9a4c16f9c7bca82e7796f488a2b55 not found: ID does not exist" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.979814 4832 scope.go:117] "RemoveContainer" containerID="5cea94da7ef39901353e4b62667213272b28ad255141df01b6b81b061023a904" Mar 12 14:52:09 crc kubenswrapper[4832]: E0312 14:52:09.980030 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cea94da7ef39901353e4b62667213272b28ad255141df01b6b81b061023a904\": container with ID starting with 5cea94da7ef39901353e4b62667213272b28ad255141df01b6b81b061023a904 not found: ID does not exist" containerID="5cea94da7ef39901353e4b62667213272b28ad255141df01b6b81b061023a904" Mar 12 14:52:09 crc kubenswrapper[4832]: I0312 14:52:09.980059 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cea94da7ef39901353e4b62667213272b28ad255141df01b6b81b061023a904"} err="failed to get container status \"5cea94da7ef39901353e4b62667213272b28ad255141df01b6b81b061023a904\": rpc 
error: code = NotFound desc = could not find container \"5cea94da7ef39901353e4b62667213272b28ad255141df01b6b81b061023a904\": container with ID starting with 5cea94da7ef39901353e4b62667213272b28ad255141df01b6b81b061023a904 not found: ID does not exist" Mar 12 14:52:10 crc kubenswrapper[4832]: I0312 14:52:10.269818 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zjrmj"] Mar 12 14:52:10 crc kubenswrapper[4832]: I0312 14:52:10.630073 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28ad10d5-8a9a-418b-af56-da46474279fe" path="/var/lib/kubelet/pods/28ad10d5-8a9a-418b-af56-da46474279fe/volumes" Mar 12 14:52:10 crc kubenswrapper[4832]: I0312 14:52:10.631030 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb45efb5-4239-4b47-9664-12fd61be0894" path="/var/lib/kubelet/pods/bb45efb5-4239-4b47-9664-12fd61be0894/volumes" Mar 12 14:52:10 crc kubenswrapper[4832]: I0312 14:52:10.842899 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zjrmj" podUID="67ae4e40-af35-414c-8be7-4f9776319561" containerName="registry-server" containerID="cri-o://317542610c41c9767c54457a45a6f71433b95e0823f7ed5eaf3c2b23fd4a5e3c" gracePeriod=2 Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.224384 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zjrmj" Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.242773 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ae4e40-af35-414c-8be7-4f9776319561-utilities\") pod \"67ae4e40-af35-414c-8be7-4f9776319561\" (UID: \"67ae4e40-af35-414c-8be7-4f9776319561\") " Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.242859 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txw2k\" (UniqueName: \"kubernetes.io/projected/67ae4e40-af35-414c-8be7-4f9776319561-kube-api-access-txw2k\") pod \"67ae4e40-af35-414c-8be7-4f9776319561\" (UID: \"67ae4e40-af35-414c-8be7-4f9776319561\") " Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.242954 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ae4e40-af35-414c-8be7-4f9776319561-catalog-content\") pod \"67ae4e40-af35-414c-8be7-4f9776319561\" (UID: \"67ae4e40-af35-414c-8be7-4f9776319561\") " Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.244493 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ae4e40-af35-414c-8be7-4f9776319561-utilities" (OuterVolumeSpecName: "utilities") pod "67ae4e40-af35-414c-8be7-4f9776319561" (UID: "67ae4e40-af35-414c-8be7-4f9776319561"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.248210 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ae4e40-af35-414c-8be7-4f9776319561-kube-api-access-txw2k" (OuterVolumeSpecName: "kube-api-access-txw2k") pod "67ae4e40-af35-414c-8be7-4f9776319561" (UID: "67ae4e40-af35-414c-8be7-4f9776319561"). InnerVolumeSpecName "kube-api-access-txw2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.267496 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ae4e40-af35-414c-8be7-4f9776319561-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67ae4e40-af35-414c-8be7-4f9776319561" (UID: "67ae4e40-af35-414c-8be7-4f9776319561"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.344204 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txw2k\" (UniqueName: \"kubernetes.io/projected/67ae4e40-af35-414c-8be7-4f9776319561-kube-api-access-txw2k\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.344241 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ae4e40-af35-414c-8be7-4f9776319561-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.344252 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ae4e40-af35-414c-8be7-4f9776319561-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.852863 4832 generic.go:334] "Generic (PLEG): container finished" podID="67ae4e40-af35-414c-8be7-4f9776319561" containerID="317542610c41c9767c54457a45a6f71433b95e0823f7ed5eaf3c2b23fd4a5e3c" exitCode=0 Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.852916 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zjrmj" event={"ID":"67ae4e40-af35-414c-8be7-4f9776319561","Type":"ContainerDied","Data":"317542610c41c9767c54457a45a6f71433b95e0823f7ed5eaf3c2b23fd4a5e3c"} Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.852955 4832 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-zjrmj" event={"ID":"67ae4e40-af35-414c-8be7-4f9776319561","Type":"ContainerDied","Data":"a0750ed1a5b65711d3e4528f95094fc118f99f1acc16712b4a0363354d85307b"} Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.852976 4832 scope.go:117] "RemoveContainer" containerID="317542610c41c9767c54457a45a6f71433b95e0823f7ed5eaf3c2b23fd4a5e3c" Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.853110 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zjrmj" Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.871990 4832 scope.go:117] "RemoveContainer" containerID="2e4bcdbac1ed9dbbe034fd46694dd1fbbd8bafc708c9b269a0c4b3a610f00cbd" Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.889642 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zjrmj"] Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.890482 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zjrmj"] Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.911982 4832 scope.go:117] "RemoveContainer" containerID="5c9de9b3c0cd7df6d7676c001fd6fe9f906fc03c6d920308db72b6cc69700942" Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.932886 4832 scope.go:117] "RemoveContainer" containerID="317542610c41c9767c54457a45a6f71433b95e0823f7ed5eaf3c2b23fd4a5e3c" Mar 12 14:52:11 crc kubenswrapper[4832]: E0312 14:52:11.933319 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"317542610c41c9767c54457a45a6f71433b95e0823f7ed5eaf3c2b23fd4a5e3c\": container with ID starting with 317542610c41c9767c54457a45a6f71433b95e0823f7ed5eaf3c2b23fd4a5e3c not found: ID does not exist" containerID="317542610c41c9767c54457a45a6f71433b95e0823f7ed5eaf3c2b23fd4a5e3c" Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.933358 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"317542610c41c9767c54457a45a6f71433b95e0823f7ed5eaf3c2b23fd4a5e3c"} err="failed to get container status \"317542610c41c9767c54457a45a6f71433b95e0823f7ed5eaf3c2b23fd4a5e3c\": rpc error: code = NotFound desc = could not find container \"317542610c41c9767c54457a45a6f71433b95e0823f7ed5eaf3c2b23fd4a5e3c\": container with ID starting with 317542610c41c9767c54457a45a6f71433b95e0823f7ed5eaf3c2b23fd4a5e3c not found: ID does not exist" Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.933384 4832 scope.go:117] "RemoveContainer" containerID="2e4bcdbac1ed9dbbe034fd46694dd1fbbd8bafc708c9b269a0c4b3a610f00cbd" Mar 12 14:52:11 crc kubenswrapper[4832]: E0312 14:52:11.933826 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e4bcdbac1ed9dbbe034fd46694dd1fbbd8bafc708c9b269a0c4b3a610f00cbd\": container with ID starting with 2e4bcdbac1ed9dbbe034fd46694dd1fbbd8bafc708c9b269a0c4b3a610f00cbd not found: ID does not exist" containerID="2e4bcdbac1ed9dbbe034fd46694dd1fbbd8bafc708c9b269a0c4b3a610f00cbd" Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.933855 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e4bcdbac1ed9dbbe034fd46694dd1fbbd8bafc708c9b269a0c4b3a610f00cbd"} err="failed to get container status \"2e4bcdbac1ed9dbbe034fd46694dd1fbbd8bafc708c9b269a0c4b3a610f00cbd\": rpc error: code = NotFound desc = could not find container \"2e4bcdbac1ed9dbbe034fd46694dd1fbbd8bafc708c9b269a0c4b3a610f00cbd\": container with ID starting with 2e4bcdbac1ed9dbbe034fd46694dd1fbbd8bafc708c9b269a0c4b3a610f00cbd not found: ID does not exist" Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.933893 4832 scope.go:117] "RemoveContainer" containerID="5c9de9b3c0cd7df6d7676c001fd6fe9f906fc03c6d920308db72b6cc69700942" Mar 12 14:52:11 crc kubenswrapper[4832]: E0312 
14:52:11.934273 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c9de9b3c0cd7df6d7676c001fd6fe9f906fc03c6d920308db72b6cc69700942\": container with ID starting with 5c9de9b3c0cd7df6d7676c001fd6fe9f906fc03c6d920308db72b6cc69700942 not found: ID does not exist" containerID="5c9de9b3c0cd7df6d7676c001fd6fe9f906fc03c6d920308db72b6cc69700942" Mar 12 14:52:11 crc kubenswrapper[4832]: I0312 14:52:11.934317 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c9de9b3c0cd7df6d7676c001fd6fe9f906fc03c6d920308db72b6cc69700942"} err="failed to get container status \"5c9de9b3c0cd7df6d7676c001fd6fe9f906fc03c6d920308db72b6cc69700942\": rpc error: code = NotFound desc = could not find container \"5c9de9b3c0cd7df6d7676c001fd6fe9f906fc03c6d920308db72b6cc69700942\": container with ID starting with 5c9de9b3c0cd7df6d7676c001fd6fe9f906fc03c6d920308db72b6cc69700942 not found: ID does not exist" Mar 12 14:52:12 crc kubenswrapper[4832]: I0312 14:52:12.625892 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ae4e40-af35-414c-8be7-4f9776319561" path="/var/lib/kubelet/pods/67ae4e40-af35-414c-8be7-4f9776319561/volumes" Mar 12 14:52:12 crc kubenswrapper[4832]: I0312 14:52:12.666271 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fwqfd"] Mar 12 14:52:12 crc kubenswrapper[4832]: I0312 14:52:12.666462 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fwqfd" podUID="584742b5-4cf7-4fcf-8b62-ad79df0bc737" containerName="registry-server" containerID="cri-o://9924cb24e4aead9594b1538e4d5293977102a815273ed377c0a3dddf702eb957" gracePeriod=2 Mar 12 14:52:12 crc kubenswrapper[4832]: I0312 14:52:12.861921 4832 generic.go:334] "Generic (PLEG): container finished" podID="584742b5-4cf7-4fcf-8b62-ad79df0bc737" 
containerID="9924cb24e4aead9594b1538e4d5293977102a815273ed377c0a3dddf702eb957" exitCode=0 Mar 12 14:52:12 crc kubenswrapper[4832]: I0312 14:52:12.862004 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwqfd" event={"ID":"584742b5-4cf7-4fcf-8b62-ad79df0bc737","Type":"ContainerDied","Data":"9924cb24e4aead9594b1538e4d5293977102a815273ed377c0a3dddf702eb957"} Mar 12 14:52:13 crc kubenswrapper[4832]: I0312 14:52:13.081865 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fwqfd" Mar 12 14:52:13 crc kubenswrapper[4832]: I0312 14:52:13.266617 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlr5c\" (UniqueName: \"kubernetes.io/projected/584742b5-4cf7-4fcf-8b62-ad79df0bc737-kube-api-access-nlr5c\") pod \"584742b5-4cf7-4fcf-8b62-ad79df0bc737\" (UID: \"584742b5-4cf7-4fcf-8b62-ad79df0bc737\") " Mar 12 14:52:13 crc kubenswrapper[4832]: I0312 14:52:13.266666 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584742b5-4cf7-4fcf-8b62-ad79df0bc737-catalog-content\") pod \"584742b5-4cf7-4fcf-8b62-ad79df0bc737\" (UID: \"584742b5-4cf7-4fcf-8b62-ad79df0bc737\") " Mar 12 14:52:13 crc kubenswrapper[4832]: I0312 14:52:13.266716 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584742b5-4cf7-4fcf-8b62-ad79df0bc737-utilities\") pod \"584742b5-4cf7-4fcf-8b62-ad79df0bc737\" (UID: \"584742b5-4cf7-4fcf-8b62-ad79df0bc737\") " Mar 12 14:52:13 crc kubenswrapper[4832]: I0312 14:52:13.270735 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584742b5-4cf7-4fcf-8b62-ad79df0bc737-utilities" (OuterVolumeSpecName: "utilities") pod "584742b5-4cf7-4fcf-8b62-ad79df0bc737" (UID: 
"584742b5-4cf7-4fcf-8b62-ad79df0bc737"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:52:13 crc kubenswrapper[4832]: I0312 14:52:13.278356 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584742b5-4cf7-4fcf-8b62-ad79df0bc737-kube-api-access-nlr5c" (OuterVolumeSpecName: "kube-api-access-nlr5c") pod "584742b5-4cf7-4fcf-8b62-ad79df0bc737" (UID: "584742b5-4cf7-4fcf-8b62-ad79df0bc737"). InnerVolumeSpecName "kube-api-access-nlr5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:52:13 crc kubenswrapper[4832]: I0312 14:52:13.367869 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584742b5-4cf7-4fcf-8b62-ad79df0bc737-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:13 crc kubenswrapper[4832]: I0312 14:52:13.367896 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlr5c\" (UniqueName: \"kubernetes.io/projected/584742b5-4cf7-4fcf-8b62-ad79df0bc737-kube-api-access-nlr5c\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:13 crc kubenswrapper[4832]: I0312 14:52:13.393615 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584742b5-4cf7-4fcf-8b62-ad79df0bc737-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584742b5-4cf7-4fcf-8b62-ad79df0bc737" (UID: "584742b5-4cf7-4fcf-8b62-ad79df0bc737"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:52:13 crc kubenswrapper[4832]: I0312 14:52:13.469492 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584742b5-4cf7-4fcf-8b62-ad79df0bc737-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:13 crc kubenswrapper[4832]: I0312 14:52:13.870551 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwqfd" event={"ID":"584742b5-4cf7-4fcf-8b62-ad79df0bc737","Type":"ContainerDied","Data":"7e3d26b9dd47c2f926bae34c978bf0adcacff8d99af12d74560bc5efc5409354"} Mar 12 14:52:13 crc kubenswrapper[4832]: I0312 14:52:13.870619 4832 scope.go:117] "RemoveContainer" containerID="9924cb24e4aead9594b1538e4d5293977102a815273ed377c0a3dddf702eb957" Mar 12 14:52:13 crc kubenswrapper[4832]: I0312 14:52:13.870765 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fwqfd" Mar 12 14:52:13 crc kubenswrapper[4832]: I0312 14:52:13.891608 4832 scope.go:117] "RemoveContainer" containerID="328f1e96242f470743c58dbfcf9495770a5deb833c060f0ae1c3080f8736b0cf" Mar 12 14:52:13 crc kubenswrapper[4832]: I0312 14:52:13.913202 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fwqfd"] Mar 12 14:52:13 crc kubenswrapper[4832]: I0312 14:52:13.916933 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fwqfd"] Mar 12 14:52:13 crc kubenswrapper[4832]: I0312 14:52:13.927469 4832 scope.go:117] "RemoveContainer" containerID="31b888b959f86a2a4c60a7cd9ffd5a6324ee2c514a034b75a31d3fb61be1fbd6" Mar 12 14:52:14 crc kubenswrapper[4832]: I0312 14:52:14.628152 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584742b5-4cf7-4fcf-8b62-ad79df0bc737" path="/var/lib/kubelet/pods/584742b5-4cf7-4fcf-8b62-ad79df0bc737/volumes" Mar 12 14:52:18 crc 
kubenswrapper[4832]: I0312 14:52:18.017084 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lgx9r"] Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.346393 4832 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.347059 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584742b5-4cf7-4fcf-8b62-ad79df0bc737" containerName="registry-server" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.347076 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="584742b5-4cf7-4fcf-8b62-ad79df0bc737" containerName="registry-server" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.347092 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ae4e40-af35-414c-8be7-4f9776319561" containerName="extract-utilities" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.347101 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ae4e40-af35-414c-8be7-4f9776319561" containerName="extract-utilities" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.347136 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ae4e40-af35-414c-8be7-4f9776319561" containerName="extract-content" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.347146 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ae4e40-af35-414c-8be7-4f9776319561" containerName="extract-content" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.347160 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb45efb5-4239-4b47-9664-12fd61be0894" containerName="extract-utilities" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.347169 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb45efb5-4239-4b47-9664-12fd61be0894" containerName="extract-utilities" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 
14:52:24.347179 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ad10d5-8a9a-418b-af56-da46474279fe" containerName="registry-server" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.347187 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ad10d5-8a9a-418b-af56-da46474279fe" containerName="registry-server" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.347223 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb45efb5-4239-4b47-9664-12fd61be0894" containerName="extract-content" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.347232 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb45efb5-4239-4b47-9664-12fd61be0894" containerName="extract-content" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.347244 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb45efb5-4239-4b47-9664-12fd61be0894" containerName="registry-server" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.347252 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb45efb5-4239-4b47-9664-12fd61be0894" containerName="registry-server" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.347262 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584742b5-4cf7-4fcf-8b62-ad79df0bc737" containerName="extract-content" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.347290 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="584742b5-4cf7-4fcf-8b62-ad79df0bc737" containerName="extract-content" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.347300 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ad10d5-8a9a-418b-af56-da46474279fe" containerName="extract-content" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.347308 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ad10d5-8a9a-418b-af56-da46474279fe" containerName="extract-content" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 
14:52:24.347319 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584742b5-4cf7-4fcf-8b62-ad79df0bc737" containerName="extract-utilities" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.347331 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="584742b5-4cf7-4fcf-8b62-ad79df0bc737" containerName="extract-utilities" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.347372 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ae4e40-af35-414c-8be7-4f9776319561" containerName="registry-server" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.347384 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ae4e40-af35-414c-8be7-4f9776319561" containerName="registry-server" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.347397 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ad10d5-8a9a-418b-af56-da46474279fe" containerName="extract-utilities" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.347411 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ad10d5-8a9a-418b-af56-da46474279fe" containerName="extract-utilities" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.347658 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb45efb5-4239-4b47-9664-12fd61be0894" containerName="registry-server" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.347711 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ae4e40-af35-414c-8be7-4f9776319561" containerName="registry-server" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.347721 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ad10d5-8a9a-418b-af56-da46474279fe" containerName="registry-server" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.347735 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="584742b5-4cf7-4fcf-8b62-ad79df0bc737" containerName="registry-server" Mar 12 14:52:24 crc 
kubenswrapper[4832]: I0312 14:52:24.348229 4832 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.348668 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c" gracePeriod=15 Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.348919 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.349112 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b8058e6536cfd591c8a4e88d5007d0a72ed0b92926d01f5909915aaab440a686" gracePeriod=15 Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.349159 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3" gracePeriod=15 Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.349227 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e" gracePeriod=15 Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.349359 4832 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c" gracePeriod=15 Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.349899 4832 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.350163 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.350178 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.350189 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.350197 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.350211 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.350224 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.350239 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.350249 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.350268 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.350278 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.350292 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.350313 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.350327 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.350338 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.350349 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.350360 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.350379 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 12 14:52:24 crc 
kubenswrapper[4832]: I0312 14:52:24.350387 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.350543 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.350561 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.350575 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.350590 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.350604 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.350622 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.350633 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.350786 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.350801 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.350939 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.350957 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.380031 4832 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.227:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.417802 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.417884 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.417945 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" 
(UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.417974 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.418015 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.418053 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.418111 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.418139 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.519350 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.519413 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.519432 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.519477 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.519466 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.519568 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.519497 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.519600 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.519628 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.519642 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:24 
crc kubenswrapper[4832]: I0312 14:52:24.519651 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.519672 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.519690 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.519746 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.519868 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.519899 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.680878 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:24 crc kubenswrapper[4832]: W0312 14:52:24.712778 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-45908b4dd85f03b7339393a5192391d7d6db45ff704f7ae8a95c1b0eac9157e1 WatchSource:0}: Error finding container 45908b4dd85f03b7339393a5192391d7d6db45ff704f7ae8a95c1b0eac9157e1: Status 404 returned error can't find the container with id 45908b4dd85f03b7339393a5192391d7d6db45ff704f7ae8a95c1b0eac9157e1 Mar 12 14:52:24 crc kubenswrapper[4832]: E0312 14:52:24.716769 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.227:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189c1fa4835d53e1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:52:24.716162017 +0000 UTC m=+303.360176263,LastTimestamp:2026-03-12 14:52:24.716162017 +0000 UTC m=+303.360176263,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.937917 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.940090 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.941072 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b8058e6536cfd591c8a4e88d5007d0a72ed0b92926d01f5909915aaab440a686" exitCode=0 Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.941157 4832 scope.go:117] "RemoveContainer" containerID="628b36a18cd463438a44b325119e602c73c911e9890bda75ac30ce2f7c84fa11" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.941196 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c" exitCode=0 Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.941281 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3" exitCode=0 Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.941297 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e" 
exitCode=2 Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.942872 4832 generic.go:334] "Generic (PLEG): container finished" podID="8df0cbbc-c142-4d08-ad20-82ef1be6ce5d" containerID="65bddd1c9cc168b31ba8288bb95776b2ef10c9b54db4dc48a5cf0fc09d13f549" exitCode=0 Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.942899 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8df0cbbc-c142-4d08-ad20-82ef1be6ce5d","Type":"ContainerDied","Data":"65bddd1c9cc168b31ba8288bb95776b2ef10c9b54db4dc48a5cf0fc09d13f549"} Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.943678 4832 status_manager.go:851] "Failed to get status for pod" podUID="8df0cbbc-c142-4d08-ad20-82ef1be6ce5d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Mar 12 14:52:24 crc kubenswrapper[4832]: I0312 14:52:24.944093 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"45908b4dd85f03b7339393a5192391d7d6db45ff704f7ae8a95c1b0eac9157e1"} Mar 12 14:52:25 crc kubenswrapper[4832]: I0312 14:52:25.951871 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"fda0945f53e4c2ede981afab1bbae6e34317fd43c6d289071804fd216e41faae"} Mar 12 14:52:25 crc kubenswrapper[4832]: I0312 14:52:25.953031 4832 status_manager.go:851] "Failed to get status for pod" podUID="8df0cbbc-c142-4d08-ad20-82ef1be6ce5d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: 
connection refused" Mar 12 14:52:25 crc kubenswrapper[4832]: E0312 14:52:25.953376 4832 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.227:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:25 crc kubenswrapper[4832]: I0312 14:52:25.955702 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 14:52:26 crc kubenswrapper[4832]: I0312 14:52:26.309806 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 12 14:52:26 crc kubenswrapper[4832]: I0312 14:52:26.311103 4832 status_manager.go:851] "Failed to get status for pod" podUID="8df0cbbc-c142-4d08-ad20-82ef1be6ce5d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Mar 12 14:52:26 crc kubenswrapper[4832]: I0312 14:52:26.444178 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8df0cbbc-c142-4d08-ad20-82ef1be6ce5d-var-lock\") pod \"8df0cbbc-c142-4d08-ad20-82ef1be6ce5d\" (UID: \"8df0cbbc-c142-4d08-ad20-82ef1be6ce5d\") " Mar 12 14:52:26 crc kubenswrapper[4832]: I0312 14:52:26.444270 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8df0cbbc-c142-4d08-ad20-82ef1be6ce5d-kubelet-dir\") pod \"8df0cbbc-c142-4d08-ad20-82ef1be6ce5d\" (UID: \"8df0cbbc-c142-4d08-ad20-82ef1be6ce5d\") " Mar 12 14:52:26 crc kubenswrapper[4832]: I0312 14:52:26.444351 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8df0cbbc-c142-4d08-ad20-82ef1be6ce5d-kube-api-access\") pod \"8df0cbbc-c142-4d08-ad20-82ef1be6ce5d\" (UID: \"8df0cbbc-c142-4d08-ad20-82ef1be6ce5d\") " Mar 12 14:52:26 crc kubenswrapper[4832]: I0312 14:52:26.444547 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8df0cbbc-c142-4d08-ad20-82ef1be6ce5d-var-lock" (OuterVolumeSpecName: "var-lock") pod "8df0cbbc-c142-4d08-ad20-82ef1be6ce5d" (UID: "8df0cbbc-c142-4d08-ad20-82ef1be6ce5d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:52:26 crc kubenswrapper[4832]: I0312 14:52:26.444586 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8df0cbbc-c142-4d08-ad20-82ef1be6ce5d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8df0cbbc-c142-4d08-ad20-82ef1be6ce5d" (UID: "8df0cbbc-c142-4d08-ad20-82ef1be6ce5d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:52:26 crc kubenswrapper[4832]: I0312 14:52:26.444737 4832 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8df0cbbc-c142-4d08-ad20-82ef1be6ce5d-var-lock\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:26 crc kubenswrapper[4832]: I0312 14:52:26.444754 4832 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8df0cbbc-c142-4d08-ad20-82ef1be6ce5d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:26 crc kubenswrapper[4832]: I0312 14:52:26.469986 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8df0cbbc-c142-4d08-ad20-82ef1be6ce5d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8df0cbbc-c142-4d08-ad20-82ef1be6ce5d" (UID: "8df0cbbc-c142-4d08-ad20-82ef1be6ce5d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:52:26 crc kubenswrapper[4832]: I0312 14:52:26.545551 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8df0cbbc-c142-4d08-ad20-82ef1be6ce5d-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:26 crc kubenswrapper[4832]: I0312 14:52:26.969917 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 14:52:26 crc kubenswrapper[4832]: I0312 14:52:26.970968 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c" exitCode=0 Mar 12 14:52:26 crc kubenswrapper[4832]: I0312 14:52:26.973687 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 12 14:52:26 crc kubenswrapper[4832]: I0312 14:52:26.973692 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8df0cbbc-c142-4d08-ad20-82ef1be6ce5d","Type":"ContainerDied","Data":"7ebac6c89d18b04b8f21a00b3d6462b8a537fe63fc930ec6dcc5266e1dafe2a7"} Mar 12 14:52:26 crc kubenswrapper[4832]: I0312 14:52:26.973732 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ebac6c89d18b04b8f21a00b3d6462b8a537fe63fc930ec6dcc5266e1dafe2a7" Mar 12 14:52:26 crc kubenswrapper[4832]: E0312 14:52:26.974697 4832 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.227:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:52:26 crc kubenswrapper[4832]: I0312 14:52:26.978020 4832 status_manager.go:851] "Failed to get status for 
pod" podUID="8df0cbbc-c142-4d08-ad20-82ef1be6ce5d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Mar 12 14:52:27 crc kubenswrapper[4832]: I0312 14:52:27.135486 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 14:52:27 crc kubenswrapper[4832]: I0312 14:52:27.136803 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:27 crc kubenswrapper[4832]: I0312 14:52:27.137705 4832 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Mar 12 14:52:27 crc kubenswrapper[4832]: I0312 14:52:27.138081 4832 status_manager.go:851] "Failed to get status for pod" podUID="8df0cbbc-c142-4d08-ad20-82ef1be6ce5d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Mar 12 14:52:27 crc kubenswrapper[4832]: I0312 14:52:27.256552 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 12 14:52:27 crc kubenswrapper[4832]: I0312 14:52:27.256734 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 12 14:52:27 crc kubenswrapper[4832]: I0312 14:52:27.256794 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:52:27 crc kubenswrapper[4832]: I0312 14:52:27.256847 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 12 14:52:27 crc kubenswrapper[4832]: I0312 14:52:27.256833 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:52:27 crc kubenswrapper[4832]: I0312 14:52:27.256947 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:52:27 crc kubenswrapper[4832]: I0312 14:52:27.257322 4832 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:27 crc kubenswrapper[4832]: I0312 14:52:27.257348 4832 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:27 crc kubenswrapper[4832]: I0312 14:52:27.257365 4832 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:27 crc kubenswrapper[4832]: I0312 14:52:27.983220 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 14:52:27 crc kubenswrapper[4832]: I0312 14:52:27.985236 4832 scope.go:117] "RemoveContainer" containerID="b8058e6536cfd591c8a4e88d5007d0a72ed0b92926d01f5909915aaab440a686" Mar 12 14:52:27 crc kubenswrapper[4832]: I0312 14:52:27.985452 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:28 crc kubenswrapper[4832]: I0312 14:52:28.003051 4832 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Mar 12 14:52:28 crc kubenswrapper[4832]: I0312 14:52:28.003579 4832 status_manager.go:851] "Failed to get status for pod" podUID="8df0cbbc-c142-4d08-ad20-82ef1be6ce5d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Mar 12 14:52:28 crc kubenswrapper[4832]: I0312 14:52:28.009342 4832 scope.go:117] "RemoveContainer" containerID="e2b83401eb6aee672a6f1ff952166ddb1c1445337f672d6fed01a957d4876b0c" Mar 12 14:52:28 crc kubenswrapper[4832]: I0312 14:52:28.024111 4832 scope.go:117] "RemoveContainer" containerID="ac6878f3023c14b5e0219c43db448d357cd9adae3325de539310ce6f18573ee3" Mar 12 14:52:28 crc kubenswrapper[4832]: I0312 14:52:28.037434 4832 scope.go:117] "RemoveContainer" containerID="723a3aad6edee1c956ac6cb2be559a35579f17a2090773877765a15ad853e13e" Mar 12 14:52:28 crc kubenswrapper[4832]: I0312 14:52:28.054091 4832 scope.go:117] "RemoveContainer" containerID="17231a1c170ba9acb7670ba599aa169ee8adaa09757df803774dfb78ac675b6c" Mar 12 14:52:28 crc kubenswrapper[4832]: I0312 14:52:28.068474 4832 scope.go:117] "RemoveContainer" containerID="32ddb0e3e8dfce8c9d3b67712e02d9233b728f2296181a53acbbf5195bfb30a5" Mar 12 14:52:28 crc kubenswrapper[4832]: E0312 14:52:28.448824 4832 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.227:6443: connect: connection refused" Mar 12 14:52:28 crc kubenswrapper[4832]: E0312 14:52:28.449774 4832 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Mar 12 14:52:28 crc kubenswrapper[4832]: E0312 14:52:28.450371 4832 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Mar 12 14:52:28 crc kubenswrapper[4832]: E0312 14:52:28.450920 4832 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Mar 12 14:52:28 crc kubenswrapper[4832]: E0312 14:52:28.451459 4832 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Mar 12 14:52:28 crc kubenswrapper[4832]: I0312 14:52:28.451545 4832 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 12 14:52:28 crc kubenswrapper[4832]: E0312 14:52:28.451973 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="200ms" Mar 12 14:52:28 crc kubenswrapper[4832]: I0312 14:52:28.629782 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" 
Mar 12 14:52:28 crc kubenswrapper[4832]: E0312 14:52:28.653584 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="400ms" Mar 12 14:52:28 crc kubenswrapper[4832]: E0312 14:52:28.692314 4832 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.227:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" volumeName="registry-storage" Mar 12 14:52:28 crc kubenswrapper[4832]: E0312 14:52:28.921261 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.227:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189c1fa4835d53e1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 14:52:24.716162017 +0000 UTC m=+303.360176263,LastTimestamp:2026-03-12 14:52:24.716162017 +0000 UTC m=+303.360176263,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 14:52:29 crc kubenswrapper[4832]: E0312 14:52:29.054790 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="800ms" Mar 12 14:52:29 crc kubenswrapper[4832]: E0312 14:52:29.855255 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="1.6s" Mar 12 14:52:31 crc kubenswrapper[4832]: E0312 14:52:31.457057 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="3.2s" Mar 12 14:52:32 crc kubenswrapper[4832]: I0312 14:52:32.623736 4832 status_manager.go:851] "Failed to get status for pod" podUID="8df0cbbc-c142-4d08-ad20-82ef1be6ce5d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Mar 12 14:52:34 crc kubenswrapper[4832]: E0312 14:52:34.658667 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="6.4s" Mar 12 14:52:36 crc kubenswrapper[4832]: I0312 14:52:36.619408 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:36 crc kubenswrapper[4832]: I0312 14:52:36.621614 4832 status_manager.go:851] "Failed to get status for pod" podUID="8df0cbbc-c142-4d08-ad20-82ef1be6ce5d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Mar 12 14:52:36 crc kubenswrapper[4832]: I0312 14:52:36.637573 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58e67bad-c4c5-4b0f-a538-a3a5c72a6902" Mar 12 14:52:36 crc kubenswrapper[4832]: I0312 14:52:36.637622 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58e67bad-c4c5-4b0f-a538-a3a5c72a6902" Mar 12 14:52:36 crc kubenswrapper[4832]: E0312 14:52:36.638246 4832 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:36 crc kubenswrapper[4832]: I0312 14:52:36.639006 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:36 crc kubenswrapper[4832]: W0312 14:52:36.659485 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-6fd2c2a18ba0fe1bf5d8bbdbba817f22efd86191d60d644073853f019f241c60 WatchSource:0}: Error finding container 6fd2c2a18ba0fe1bf5d8bbdbba817f22efd86191d60d644073853f019f241c60: Status 404 returned error can't find the container with id 6fd2c2a18ba0fe1bf5d8bbdbba817f22efd86191d60d644073853f019f241c60 Mar 12 14:52:37 crc kubenswrapper[4832]: I0312 14:52:37.045859 4832 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f8c90365c522d6de699e19ea7d7c6cfd6c644d2a490d5c5001efe47431440352" exitCode=0 Mar 12 14:52:37 crc kubenswrapper[4832]: I0312 14:52:37.045937 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f8c90365c522d6de699e19ea7d7c6cfd6c644d2a490d5c5001efe47431440352"} Mar 12 14:52:37 crc kubenswrapper[4832]: I0312 14:52:37.045997 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6fd2c2a18ba0fe1bf5d8bbdbba817f22efd86191d60d644073853f019f241c60"} Mar 12 14:52:37 crc kubenswrapper[4832]: I0312 14:52:37.046588 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58e67bad-c4c5-4b0f-a538-a3a5c72a6902" Mar 12 14:52:37 crc kubenswrapper[4832]: I0312 14:52:37.046632 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58e67bad-c4c5-4b0f-a538-a3a5c72a6902" Mar 12 14:52:37 crc kubenswrapper[4832]: I0312 14:52:37.047251 4832 status_manager.go:851] 
"Failed to get status for pod" podUID="8df0cbbc-c142-4d08-ad20-82ef1be6ce5d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Mar 12 14:52:37 crc kubenswrapper[4832]: E0312 14:52:37.047472 4832 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:38 crc kubenswrapper[4832]: I0312 14:52:38.060066 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b2869e474741d862e293ae76c9fa638934cf77346df42ac07681627b44836e2b"} Mar 12 14:52:38 crc kubenswrapper[4832]: I0312 14:52:38.060485 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1f84f2b188a9e1e2a2104e3a99cf23c1e253c1cac62d6d0e4d0452f47e9d2e28"} Mar 12 14:52:38 crc kubenswrapper[4832]: I0312 14:52:38.060528 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"391b67477cc7a74a83c802d373e64adbc16dac328a38e9938de4f7c31d7eb41b"} Mar 12 14:52:38 crc kubenswrapper[4832]: I0312 14:52:38.060544 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9f4ef0538ba3e2e3789742fb98cba90201493f1b5a95345e0e40d339d13d7026"} Mar 12 14:52:38 crc kubenswrapper[4832]: I0312 14:52:38.067296 4832 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 12 14:52:38 crc kubenswrapper[4832]: I0312 14:52:38.067926 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 12 14:52:38 crc kubenswrapper[4832]: I0312 14:52:38.067974 4832 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9276e887ebf6a570e0c7707f87257a4d155c33e59d354ab45ab02c9e1d03598d" exitCode=1 Mar 12 14:52:38 crc kubenswrapper[4832]: I0312 14:52:38.068002 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9276e887ebf6a570e0c7707f87257a4d155c33e59d354ab45ab02c9e1d03598d"} Mar 12 14:52:38 crc kubenswrapper[4832]: I0312 14:52:38.068425 4832 scope.go:117] "RemoveContainer" containerID="9276e887ebf6a570e0c7707f87257a4d155c33e59d354ab45ab02c9e1d03598d" Mar 12 14:52:39 crc kubenswrapper[4832]: I0312 14:52:39.078893 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 12 14:52:39 crc kubenswrapper[4832]: I0312 14:52:39.079848 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 12 14:52:39 crc kubenswrapper[4832]: I0312 14:52:39.079922 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cbd5ba4e9efd562f9fd08aec6461e1d27eada143d57b4965a84409014e777524"} Mar 12 14:52:39 crc kubenswrapper[4832]: I0312 14:52:39.083777 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"78757e25e2df2b0cd9e0ade9698086a6ce6782802e1e1578cb47d1aea8cc1519"} Mar 12 14:52:39 crc kubenswrapper[4832]: I0312 14:52:39.084084 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:39 crc kubenswrapper[4832]: I0312 14:52:39.084154 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58e67bad-c4c5-4b0f-a538-a3a5c72a6902" Mar 12 14:52:39 crc kubenswrapper[4832]: I0312 14:52:39.084171 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58e67bad-c4c5-4b0f-a538-a3a5c72a6902" Mar 12 14:52:41 crc kubenswrapper[4832]: I0312 14:52:41.640038 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:41 crc kubenswrapper[4832]: I0312 14:52:41.640111 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:41 crc kubenswrapper[4832]: I0312 14:52:41.648444 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:41 crc kubenswrapper[4832]: I0312 14:52:41.689166 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.043942 4832 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" podUID="0e48a27d-76e1-45f3-87af-c9b306291d25" containerName="oauth-openshift" containerID="cri-o://3d758abe05f734853633aff229be4c2b821573b43b71d4c6bd254ae404cfca2d" gracePeriod=15 Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.505878 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.694391 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl4qb\" (UniqueName: \"kubernetes.io/projected/0e48a27d-76e1-45f3-87af-c9b306291d25-kube-api-access-zl4qb\") pod \"0e48a27d-76e1-45f3-87af-c9b306291d25\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.694457 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-service-ca\") pod \"0e48a27d-76e1-45f3-87af-c9b306291d25\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.694483 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-session\") pod \"0e48a27d-76e1-45f3-87af-c9b306291d25\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.694741 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-template-error\") pod \"0e48a27d-76e1-45f3-87af-c9b306291d25\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " Mar 12 14:52:43 crc 
kubenswrapper[4832]: I0312 14:52:43.694793 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-audit-policies\") pod \"0e48a27d-76e1-45f3-87af-c9b306291d25\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.694837 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-cliconfig\") pod \"0e48a27d-76e1-45f3-87af-c9b306291d25\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.694899 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-trusted-ca-bundle\") pod \"0e48a27d-76e1-45f3-87af-c9b306291d25\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.694946 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-template-login\") pod \"0e48a27d-76e1-45f3-87af-c9b306291d25\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.694983 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-router-certs\") pod \"0e48a27d-76e1-45f3-87af-c9b306291d25\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.695003 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e48a27d-76e1-45f3-87af-c9b306291d25-audit-dir\") pod \"0e48a27d-76e1-45f3-87af-c9b306291d25\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.695041 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-idp-0-file-data\") pod \"0e48a27d-76e1-45f3-87af-c9b306291d25\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.695065 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-ocp-branding-template\") pod \"0e48a27d-76e1-45f3-87af-c9b306291d25\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.695124 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e48a27d-76e1-45f3-87af-c9b306291d25-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0e48a27d-76e1-45f3-87af-c9b306291d25" (UID: "0e48a27d-76e1-45f3-87af-c9b306291d25"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.695418 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "0e48a27d-76e1-45f3-87af-c9b306291d25" (UID: "0e48a27d-76e1-45f3-87af-c9b306291d25"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.695430 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "0e48a27d-76e1-45f3-87af-c9b306291d25" (UID: "0e48a27d-76e1-45f3-87af-c9b306291d25"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.695457 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "0e48a27d-76e1-45f3-87af-c9b306291d25" (UID: "0e48a27d-76e1-45f3-87af-c9b306291d25"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.695638 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "0e48a27d-76e1-45f3-87af-c9b306291d25" (UID: "0e48a27d-76e1-45f3-87af-c9b306291d25"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.695752 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-serving-cert\") pod \"0e48a27d-76e1-45f3-87af-c9b306291d25\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.695829 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-template-provider-selection\") pod \"0e48a27d-76e1-45f3-87af-c9b306291d25\" (UID: \"0e48a27d-76e1-45f3-87af-c9b306291d25\") " Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.696275 4832 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e48a27d-76e1-45f3-87af-c9b306291d25-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.696298 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.696311 4832 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.696333 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:43 crc 
kubenswrapper[4832]: I0312 14:52:43.696346 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.700141 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "0e48a27d-76e1-45f3-87af-c9b306291d25" (UID: "0e48a27d-76e1-45f3-87af-c9b306291d25"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.700327 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "0e48a27d-76e1-45f3-87af-c9b306291d25" (UID: "0e48a27d-76e1-45f3-87af-c9b306291d25"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.701302 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "0e48a27d-76e1-45f3-87af-c9b306291d25" (UID: "0e48a27d-76e1-45f3-87af-c9b306291d25"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.701832 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "0e48a27d-76e1-45f3-87af-c9b306291d25" (UID: "0e48a27d-76e1-45f3-87af-c9b306291d25"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.702893 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "0e48a27d-76e1-45f3-87af-c9b306291d25" (UID: "0e48a27d-76e1-45f3-87af-c9b306291d25"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.702972 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e48a27d-76e1-45f3-87af-c9b306291d25-kube-api-access-zl4qb" (OuterVolumeSpecName: "kube-api-access-zl4qb") pod "0e48a27d-76e1-45f3-87af-c9b306291d25" (UID: "0e48a27d-76e1-45f3-87af-c9b306291d25"). InnerVolumeSpecName "kube-api-access-zl4qb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.703195 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "0e48a27d-76e1-45f3-87af-c9b306291d25" (UID: "0e48a27d-76e1-45f3-87af-c9b306291d25"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.703547 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "0e48a27d-76e1-45f3-87af-c9b306291d25" (UID: "0e48a27d-76e1-45f3-87af-c9b306291d25"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.704794 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "0e48a27d-76e1-45f3-87af-c9b306291d25" (UID: "0e48a27d-76e1-45f3-87af-c9b306291d25"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.797173 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.797227 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.797249 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl4qb\" (UniqueName: \"kubernetes.io/projected/0e48a27d-76e1-45f3-87af-c9b306291d25-kube-api-access-zl4qb\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.797268 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.797286 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.797306 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.797324 4832 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.797343 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:43 crc kubenswrapper[4832]: I0312 14:52:43.797360 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e48a27d-76e1-45f3-87af-c9b306291d25-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:44 crc kubenswrapper[4832]: I0312 14:52:44.091744 4832 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:44 crc kubenswrapper[4832]: I0312 14:52:44.112063 4832 generic.go:334] "Generic (PLEG): container finished" podID="0e48a27d-76e1-45f3-87af-c9b306291d25" containerID="3d758abe05f734853633aff229be4c2b821573b43b71d4c6bd254ae404cfca2d" exitCode=0 Mar 12 14:52:44 crc kubenswrapper[4832]: I0312 14:52:44.112427 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58e67bad-c4c5-4b0f-a538-a3a5c72a6902" Mar 12 14:52:44 crc kubenswrapper[4832]: I0312 14:52:44.112555 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58e67bad-c4c5-4b0f-a538-a3a5c72a6902" Mar 12 14:52:44 crc kubenswrapper[4832]: I0312 14:52:44.112463 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" event={"ID":"0e48a27d-76e1-45f3-87af-c9b306291d25","Type":"ContainerDied","Data":"3d758abe05f734853633aff229be4c2b821573b43b71d4c6bd254ae404cfca2d"} Mar 12 14:52:44 crc 
kubenswrapper[4832]: I0312 14:52:44.112673 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" event={"ID":"0e48a27d-76e1-45f3-87af-c9b306291d25","Type":"ContainerDied","Data":"1a96a82daeb1bad3884445fc4d15ae934710e9ff893cb37f8c76033dee3d3eaa"} Mar 12 14:52:44 crc kubenswrapper[4832]: I0312 14:52:44.112711 4832 scope.go:117] "RemoveContainer" containerID="3d758abe05f734853633aff229be4c2b821573b43b71d4c6bd254ae404cfca2d" Mar 12 14:52:44 crc kubenswrapper[4832]: I0312 14:52:44.112846 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lgx9r" Mar 12 14:52:44 crc kubenswrapper[4832]: I0312 14:52:44.116270 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:52:44 crc kubenswrapper[4832]: I0312 14:52:44.118596 4832 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4e94127c-b5f8-4efd-aeda-72465cdddc14" Mar 12 14:52:44 crc kubenswrapper[4832]: I0312 14:52:44.124194 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 14:52:44 crc kubenswrapper[4832]: I0312 14:52:44.124324 4832 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 12 14:52:44 crc kubenswrapper[4832]: I0312 14:52:44.124357 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 12 14:52:44 crc kubenswrapper[4832]: I0312 14:52:44.129416 4832 scope.go:117] "RemoveContainer" containerID="3d758abe05f734853633aff229be4c2b821573b43b71d4c6bd254ae404cfca2d" Mar 12 14:52:44 crc kubenswrapper[4832]: E0312 14:52:44.129869 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d758abe05f734853633aff229be4c2b821573b43b71d4c6bd254ae404cfca2d\": container with ID starting with 3d758abe05f734853633aff229be4c2b821573b43b71d4c6bd254ae404cfca2d not found: ID does not exist" containerID="3d758abe05f734853633aff229be4c2b821573b43b71d4c6bd254ae404cfca2d" Mar 12 14:52:44 crc kubenswrapper[4832]: I0312 14:52:44.130012 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d758abe05f734853633aff229be4c2b821573b43b71d4c6bd254ae404cfca2d"} err="failed to get container status \"3d758abe05f734853633aff229be4c2b821573b43b71d4c6bd254ae404cfca2d\": rpc error: code = NotFound desc = could not find container \"3d758abe05f734853633aff229be4c2b821573b43b71d4c6bd254ae404cfca2d\": container with ID starting with 3d758abe05f734853633aff229be4c2b821573b43b71d4c6bd254ae404cfca2d not found: ID does not exist" Mar 12 14:52:45 crc kubenswrapper[4832]: I0312 14:52:45.119788 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58e67bad-c4c5-4b0f-a538-a3a5c72a6902" Mar 12 14:52:45 crc kubenswrapper[4832]: I0312 14:52:45.120068 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58e67bad-c4c5-4b0f-a538-a3a5c72a6902" Mar 12 14:52:48 crc kubenswrapper[4832]: I0312 14:52:48.767086 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:52:48 crc kubenswrapper[4832]: I0312 14:52:48.767678 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:52:48 crc kubenswrapper[4832]: I0312 14:52:48.767809 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:52:48 crc kubenswrapper[4832]: I0312 14:52:48.767923 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:52:48 crc kubenswrapper[4832]: I0312 14:52:48.770541 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 12 14:52:48 crc kubenswrapper[4832]: I0312 14:52:48.770548 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 12 14:52:48 crc kubenswrapper[4832]: I0312 
14:52:48.770690 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 12 14:52:48 crc kubenswrapper[4832]: I0312 14:52:48.780235 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:52:48 crc kubenswrapper[4832]: I0312 14:52:48.780643 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 12 14:52:48 crc kubenswrapper[4832]: I0312 14:52:48.790625 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:52:48 crc kubenswrapper[4832]: I0312 14:52:48.794696 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:52:48 crc kubenswrapper[4832]: I0312 14:52:48.797341 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:52:48 crc kubenswrapper[4832]: I0312 14:52:48.996259 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 14:52:49 crc kubenswrapper[4832]: I0312 14:52:49.045367 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:52:49 crc kubenswrapper[4832]: I0312 14:52:49.060441 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 14:52:49 crc kubenswrapper[4832]: W0312 14:52:49.415230 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-ea06124fb8f6e59fc2083e2a740a444b9c1a666e6416e4a97165772be0865a56 WatchSource:0}: Error finding container ea06124fb8f6e59fc2083e2a740a444b9c1a666e6416e4a97165772be0865a56: Status 404 returned error can't find the container with id ea06124fb8f6e59fc2083e2a740a444b9c1a666e6416e4a97165772be0865a56 Mar 12 14:52:49 crc kubenswrapper[4832]: W0312 14:52:49.484841 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-bb7e09e2af3413324259e39be59d094afa2868c689a480f86b910b3b8726a718 WatchSource:0}: Error finding container bb7e09e2af3413324259e39be59d094afa2868c689a480f86b910b3b8726a718: Status 404 returned error can't find the container with id bb7e09e2af3413324259e39be59d094afa2868c689a480f86b910b3b8726a718 Mar 12 14:52:49 crc kubenswrapper[4832]: W0312 14:52:49.535270 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-d27296f7cd59ccb088f15a25aa86cad2f3366e8fee171c2c0a90b60fcdd161b2 WatchSource:0}: Error finding container d27296f7cd59ccb088f15a25aa86cad2f3366e8fee171c2c0a90b60fcdd161b2: Status 404 returned error can't find the container with id d27296f7cd59ccb088f15a25aa86cad2f3366e8fee171c2c0a90b60fcdd161b2 Mar 12 14:52:50 crc kubenswrapper[4832]: I0312 14:52:50.163231 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5727a782b25cb113239cf1a529ca3e32a54ff666deda653ce6befec9ce8fd16e"} Mar 12 14:52:50 crc kubenswrapper[4832]: I0312 14:52:50.163304 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ea06124fb8f6e59fc2083e2a740a444b9c1a666e6416e4a97165772be0865a56"} Mar 12 14:52:50 crc kubenswrapper[4832]: I0312 14:52:50.166836 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0eb0b6f5f37581d12ee9698ca51eed72f0652143174e4815cc4c6570648cedef"} Mar 12 14:52:50 crc kubenswrapper[4832]: I0312 14:52:50.166884 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d27296f7cd59ccb088f15a25aa86cad2f3366e8fee171c2c0a90b60fcdd161b2"} Mar 12 14:52:50 crc kubenswrapper[4832]: I0312 14:52:50.169858 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6d9b30ce8e00c22a2d1094a21048ae8d11d4bf05b5cccf4d7396247144817bca"} Mar 12 14:52:50 crc kubenswrapper[4832]: I0312 14:52:50.169901 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"bb7e09e2af3413324259e39be59d094afa2868c689a480f86b910b3b8726a718"} Mar 12 14:52:50 crc kubenswrapper[4832]: I0312 14:52:50.170494 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:52:51 crc kubenswrapper[4832]: I0312 14:52:51.176343 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Mar 12 14:52:51 crc kubenswrapper[4832]: I0312 14:52:51.176663 4832 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="0eb0b6f5f37581d12ee9698ca51eed72f0652143174e4815cc4c6570648cedef" exitCode=255 Mar 12 14:52:51 crc kubenswrapper[4832]: I0312 14:52:51.176716 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"0eb0b6f5f37581d12ee9698ca51eed72f0652143174e4815cc4c6570648cedef"} Mar 12 14:52:51 crc kubenswrapper[4832]: I0312 14:52:51.177416 4832 scope.go:117] "RemoveContainer" containerID="0eb0b6f5f37581d12ee9698ca51eed72f0652143174e4815cc4c6570648cedef" Mar 12 14:52:52 crc kubenswrapper[4832]: I0312 14:52:52.192015 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Mar 12 14:52:52 crc 
kubenswrapper[4832]: I0312 14:52:52.192404 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Mar 12 14:52:52 crc kubenswrapper[4832]: I0312 14:52:52.192450 4832 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="6d78551ee79ce7edebeb61e89f8682a1bade222303474eccc48172443fedde97" exitCode=255 Mar 12 14:52:52 crc kubenswrapper[4832]: I0312 14:52:52.192482 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"6d78551ee79ce7edebeb61e89f8682a1bade222303474eccc48172443fedde97"} Mar 12 14:52:52 crc kubenswrapper[4832]: I0312 14:52:52.192537 4832 scope.go:117] "RemoveContainer" containerID="0eb0b6f5f37581d12ee9698ca51eed72f0652143174e4815cc4c6570648cedef" Mar 12 14:52:52 crc kubenswrapper[4832]: I0312 14:52:52.193281 4832 scope.go:117] "RemoveContainer" containerID="6d78551ee79ce7edebeb61e89f8682a1bade222303474eccc48172443fedde97" Mar 12 14:52:52 crc kubenswrapper[4832]: E0312 14:52:52.193865 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 14:52:52 crc kubenswrapper[4832]: I0312 14:52:52.638938 4832 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4e94127c-b5f8-4efd-aeda-72465cdddc14" Mar 12 14:52:53 crc 
kubenswrapper[4832]: I0312 14:52:53.199842 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Mar 12 14:52:53 crc kubenswrapper[4832]: I0312 14:52:53.776285 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 12 14:52:54 crc kubenswrapper[4832]: I0312 14:52:54.079438 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 12 14:52:54 crc kubenswrapper[4832]: I0312 14:52:54.132778 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 14:52:54 crc kubenswrapper[4832]: I0312 14:52:54.138057 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 14:52:54 crc kubenswrapper[4832]: I0312 14:52:54.550475 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 12 14:52:54 crc kubenswrapper[4832]: I0312 14:52:54.605664 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 12 14:52:54 crc kubenswrapper[4832]: I0312 14:52:54.647617 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 12 14:52:55 crc kubenswrapper[4832]: I0312 14:52:55.169394 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 12 14:52:55 crc kubenswrapper[4832]: I0312 14:52:55.429478 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 12 14:52:55 crc kubenswrapper[4832]: I0312 
14:52:55.447844 4832 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 12 14:52:55 crc kubenswrapper[4832]: I0312 14:52:55.474432 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 12 14:52:56 crc kubenswrapper[4832]: I0312 14:52:56.268535 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 14:52:56 crc kubenswrapper[4832]: I0312 14:52:56.382428 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 12 14:52:56 crc kubenswrapper[4832]: I0312 14:52:56.481255 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 12 14:52:56 crc kubenswrapper[4832]: I0312 14:52:56.698081 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 12 14:52:56 crc kubenswrapper[4832]: I0312 14:52:56.823460 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 12 14:52:56 crc kubenswrapper[4832]: I0312 14:52:56.936748 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 12 14:52:56 crc kubenswrapper[4832]: I0312 14:52:56.952799 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 14:52:56 crc kubenswrapper[4832]: I0312 14:52:56.989928 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 12 14:52:57 crc kubenswrapper[4832]: I0312 14:52:57.060248 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 12 14:52:57 crc kubenswrapper[4832]: I0312 14:52:57.154910 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 12 14:52:57 crc kubenswrapper[4832]: I0312 14:52:57.302418 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 12 14:52:57 crc kubenswrapper[4832]: I0312 14:52:57.304126 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 12 14:52:57 crc kubenswrapper[4832]: I0312 14:52:57.350305 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 12 14:52:57 crc kubenswrapper[4832]: I0312 14:52:57.362651 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 12 14:52:57 crc kubenswrapper[4832]: I0312 14:52:57.413125 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 12 14:52:57 crc kubenswrapper[4832]: I0312 14:52:57.456794 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 12 14:52:57 crc kubenswrapper[4832]: I0312 14:52:57.484836 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 12 14:52:57 crc kubenswrapper[4832]: I0312 14:52:57.493788 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 12 14:52:57 crc kubenswrapper[4832]: I0312 14:52:57.507632 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 12 14:52:57 crc kubenswrapper[4832]: I0312 14:52:57.770704 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 12 14:52:57 crc kubenswrapper[4832]: I0312 14:52:57.785668 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 12 14:52:57 crc kubenswrapper[4832]: I0312 14:52:57.935417 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 12 14:52:57 crc kubenswrapper[4832]: I0312 14:52:57.951454 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 12 14:52:58 crc kubenswrapper[4832]: I0312 14:52:58.248711 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 12 14:52:58 crc kubenswrapper[4832]: I0312 14:52:58.379263 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 12 14:52:58 crc kubenswrapper[4832]: I0312 14:52:58.430645 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 12 14:52:58 crc kubenswrapper[4832]: I0312 14:52:58.450815 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 12 14:52:58 crc kubenswrapper[4832]: I0312 14:52:58.463028 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 12 14:52:58 crc kubenswrapper[4832]: I0312 14:52:58.615873 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 12 14:52:58 crc kubenswrapper[4832]: I0312 14:52:58.649387 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" 
Mar 12 14:52:58 crc kubenswrapper[4832]: I0312 14:52:58.744006 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 12 14:52:58 crc kubenswrapper[4832]: I0312 14:52:58.892285 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 12 14:52:59 crc kubenswrapper[4832]: I0312 14:52:59.090055 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 12 14:52:59 crc kubenswrapper[4832]: I0312 14:52:59.117020 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 12 14:52:59 crc kubenswrapper[4832]: I0312 14:52:59.186427 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 14:52:59 crc kubenswrapper[4832]: I0312 14:52:59.189666 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 12 14:52:59 crc kubenswrapper[4832]: I0312 14:52:59.364301 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 12 14:52:59 crc kubenswrapper[4832]: I0312 14:52:59.366108 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 12 14:52:59 crc kubenswrapper[4832]: I0312 14:52:59.452661 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 12 14:52:59 crc kubenswrapper[4832]: I0312 14:52:59.470996 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 12 14:52:59 crc kubenswrapper[4832]: I0312 14:52:59.648015 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 12 14:52:59 crc kubenswrapper[4832]: I0312 14:52:59.730928 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 12 14:52:59 crc kubenswrapper[4832]: I0312 14:52:59.739756 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 12 14:52:59 crc kubenswrapper[4832]: I0312 14:52:59.763091 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 12 14:52:59 crc kubenswrapper[4832]: I0312 14:52:59.785923 4832 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 12 14:52:59 crc kubenswrapper[4832]: I0312 14:52:59.812495 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 12 14:52:59 crc kubenswrapper[4832]: I0312 14:52:59.912660 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 12 14:52:59 crc kubenswrapper[4832]: I0312 14:52:59.941158 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 12 14:52:59 crc kubenswrapper[4832]: I0312 14:52:59.947279 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 12 14:52:59 crc kubenswrapper[4832]: I0312 14:52:59.961536 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 12 14:52:59 crc kubenswrapper[4832]: I0312 14:52:59.983814 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.070342 4832 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.205547 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.219434 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.289166 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.309650 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.313430 4832 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.322716 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-lgx9r"] Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.322827 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.331349 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.349788 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.349760959 podStartE2EDuration="16.349760959s" podCreationTimestamp="2026-03-12 14:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-12 14:53:00.348974128 +0000 UTC m=+338.992988404" watchObservedRunningTime="2026-03-12 14:53:00.349760959 +0000 UTC m=+338.993775215" Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.429403 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.546096 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.565820 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.633711 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e48a27d-76e1-45f3-87af-c9b306291d25" path="/var/lib/kubelet/pods/0e48a27d-76e1-45f3-87af-c9b306291d25/volumes" Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.669291 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.723923 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.808102 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.828262 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.903001 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 12 14:53:00 crc kubenswrapper[4832]: I0312 14:53:00.974327 4832 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 12 14:53:01 crc kubenswrapper[4832]: I0312 14:53:01.025905 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 12 14:53:01 crc kubenswrapper[4832]: I0312 14:53:01.041918 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 12 14:53:01 crc kubenswrapper[4832]: I0312 14:53:01.092730 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 12 14:53:01 crc kubenswrapper[4832]: I0312 14:53:01.181370 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 12 14:53:01 crc kubenswrapper[4832]: I0312 14:53:01.322594 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 12 14:53:01 crc kubenswrapper[4832]: I0312 14:53:01.341920 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 12 14:53:01 crc kubenswrapper[4832]: I0312 14:53:01.432534 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 12 14:53:01 crc kubenswrapper[4832]: I0312 14:53:01.516565 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 12 14:53:01 crc kubenswrapper[4832]: I0312 14:53:01.535133 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 12 14:53:01 crc kubenswrapper[4832]: I0312 14:53:01.591869 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 12 14:53:01 crc kubenswrapper[4832]: I0312 14:53:01.632116 4832 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 12 14:53:01 crc kubenswrapper[4832]: I0312 14:53:01.708102 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 14:53:01 crc kubenswrapper[4832]: I0312 14:53:01.713062 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 12 14:53:01 crc kubenswrapper[4832]: I0312 14:53:01.729721 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 14:53:01 crc kubenswrapper[4832]: I0312 14:53:01.857240 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 14:53:01 crc kubenswrapper[4832]: I0312 14:53:01.972026 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 12 14:53:02 crc kubenswrapper[4832]: I0312 14:53:02.101620 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 12 14:53:02 crc kubenswrapper[4832]: I0312 14:53:02.103048 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 12 14:53:02 crc kubenswrapper[4832]: I0312 14:53:02.127868 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 12 14:53:02 crc kubenswrapper[4832]: I0312 14:53:02.355800 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 12 14:53:02 crc kubenswrapper[4832]: I0312 14:53:02.374036 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 12 14:53:02 crc kubenswrapper[4832]: I0312 
14:53:02.380590 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 12 14:53:02 crc kubenswrapper[4832]: I0312 14:53:02.434007 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 12 14:53:02 crc kubenswrapper[4832]: I0312 14:53:02.576353 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 12 14:53:02 crc kubenswrapper[4832]: I0312 14:53:02.706641 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 12 14:53:02 crc kubenswrapper[4832]: I0312 14:53:02.765854 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 12 14:53:02 crc kubenswrapper[4832]: I0312 14:53:02.832827 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.021241 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.072618 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.115605 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.155452 4832 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.173866 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 12 14:53:03 crc kubenswrapper[4832]: 
I0312 14:53:03.194230 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.332312 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.437818 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.441988 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.480190 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.498221 4832 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.515137 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.516415 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.597375 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.642628 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.649644 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"signing-key" Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.655208 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.832347 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.876196 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.892574 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.914153 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 12 14:53:03 crc kubenswrapper[4832]: I0312 14:53:03.945147 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 12 14:53:04 crc kubenswrapper[4832]: I0312 14:53:04.002085 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 12 14:53:04 crc kubenswrapper[4832]: I0312 14:53:04.093271 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 12 14:53:04 crc kubenswrapper[4832]: I0312 14:53:04.319312 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 12 14:53:04 crc kubenswrapper[4832]: I0312 14:53:04.447746 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 12 14:53:04 crc kubenswrapper[4832]: I0312 14:53:04.448571 4832 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 12 14:53:04 crc kubenswrapper[4832]: I0312 14:53:04.450472 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 14:53:04 crc kubenswrapper[4832]: I0312 14:53:04.467862 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 12 14:53:04 crc kubenswrapper[4832]: I0312 14:53:04.568087 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 12 14:53:04 crc kubenswrapper[4832]: I0312 14:53:04.619983 4832 scope.go:117] "RemoveContainer" containerID="6d78551ee79ce7edebeb61e89f8682a1bade222303474eccc48172443fedde97" Mar 12 14:53:04 crc kubenswrapper[4832]: I0312 14:53:04.647639 4832 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 12 14:53:04 crc kubenswrapper[4832]: I0312 14:53:04.670776 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 12 14:53:04 crc kubenswrapper[4832]: I0312 14:53:04.882888 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 12 14:53:04 crc kubenswrapper[4832]: I0312 14:53:04.912085 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.113036 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.113197 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 12 14:53:05 crc 
kubenswrapper[4832]: I0312 14:53:05.274991 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.275038 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a2262ecd84895f4708fa92f39b679149fa2783acc8cdee1c2ace5920c7b35e65"} Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.326673 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.386801 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg"] Mar 12 14:53:05 crc kubenswrapper[4832]: E0312 14:53:05.387107 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df0cbbc-c142-4d08-ad20-82ef1be6ce5d" containerName="installer" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.387123 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df0cbbc-c142-4d08-ad20-82ef1be6ce5d" containerName="installer" Mar 12 14:53:05 crc kubenswrapper[4832]: E0312 14:53:05.387153 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e48a27d-76e1-45f3-87af-c9b306291d25" containerName="oauth-openshift" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.387163 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e48a27d-76e1-45f3-87af-c9b306291d25" containerName="oauth-openshift" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.387304 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e48a27d-76e1-45f3-87af-c9b306291d25" containerName="oauth-openshift" Mar 12 14:53:05 crc 
kubenswrapper[4832]: I0312 14:53:05.387326 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df0cbbc-c142-4d08-ad20-82ef1be6ce5d" containerName="installer" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.388975 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.391690 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.391931 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.392036 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.392156 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.392077 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.392095 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.392661 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.392990 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.393995 4832 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.394061 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.394146 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.394242 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.400336 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.404434 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.408542 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.470281 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.516096 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.572432 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-user-template-login\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: 
\"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.572487 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.572548 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.572580 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.572606 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-user-template-error\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " 
pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.572632 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a99bb108-962b-46dc-a64c-0485744704a5-audit-dir\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.572887 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-session\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.572945 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a99bb108-962b-46dc-a64c-0485744704a5-audit-policies\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.573063 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-router-certs\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.573088 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-service-ca\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.573125 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrslr\" (UniqueName: \"kubernetes.io/projected/a99bb108-962b-46dc-a64c-0485744704a5-kube-api-access-hrslr\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.573170 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.573237 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.573309 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.611234 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.638334 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.673766 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.673817 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.673838 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-user-template-login\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.674848 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.674873 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.675191 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.675230 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.675250 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-user-template-error\") pod 
\"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.675270 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a99bb108-962b-46dc-a64c-0485744704a5-audit-dir\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.675290 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-session\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.675305 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a99bb108-962b-46dc-a64c-0485744704a5-audit-policies\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.675337 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-service-ca\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.675354 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-router-certs\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.675377 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrslr\" (UniqueName: \"kubernetes.io/projected/a99bb108-962b-46dc-a64c-0485744704a5-kube-api-access-hrslr\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.675394 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.675831 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.675877 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a99bb108-962b-46dc-a64c-0485744704a5-audit-dir\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " 
pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.676115 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a99bb108-962b-46dc-a64c-0485744704a5-audit-policies\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.676425 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-service-ca\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.680260 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-router-certs\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.680285 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.680630 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.680921 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.681315 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-user-template-error\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.681587 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-user-template-login\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.684846 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-session\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " 
pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.685416 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a99bb108-962b-46dc-a64c-0485744704a5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.692940 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.692944 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrslr\" (UniqueName: \"kubernetes.io/projected/a99bb108-962b-46dc-a64c-0485744704a5-kube-api-access-hrslr\") pod \"oauth-openshift-78b7d86cb4-6f2lg\" (UID: \"a99bb108-962b-46dc-a64c-0485744704a5\") " pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.718222 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.736750 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.746226 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.814032 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.833403 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 12 14:53:05 crc kubenswrapper[4832]: I0312 14:53:05.870741 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.080433 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.083074 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.131460 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.134689 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.238701 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.261623 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.379978 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.381745 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.439949 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.502389 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.506491 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.508829 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.514920 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.624342 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.627838 4832 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.628247 4832 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://fda0945f53e4c2ede981afab1bbae6e34317fd43c6d289071804fd216e41faae" gracePeriod=5 Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.756260 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.782381 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.814632 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.933257 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 12 14:53:06 crc kubenswrapper[4832]: I0312 14:53:06.961108 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.062206 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.068007 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.274184 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.417216 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.444412 4832 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.459018 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.486611 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.515930 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.519990 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.554319 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.570148 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.580964 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.654683 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.668874 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.727180 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.734085 4832 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.748379 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.751694 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.799966 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.841076 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw"] Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.841331 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" podUID="4e67d890-3131-454a-bd9c-89b8212c8842" containerName="controller-manager" containerID="cri-o://de9c4f90ddbe052cfd4cd05767294661898acd498a8fe454fe1ab8bb9953e880" gracePeriod=30 Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.858267 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz"] Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.858495 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" podUID="c275e157-9561-4355-a81b-a14441a2940f" containerName="route-controller-manager" containerID="cri-o://10328ed23acbd83a0c646616f06bb6d6c9c4d23c85f9d3a04231964c145c48c0" gracePeriod=30 Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.907472 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.937122 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 12 14:53:07 crc kubenswrapper[4832]: I0312 14:53:07.957294 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.031199 4832 patch_prober.go:28] interesting pod/route-controller-manager-7dbf96fbfc-frrqz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.031243 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" podUID="c275e157-9561-4355-a81b-a14441a2940f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.046407 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.098586 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.163950 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.164772 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" 
Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.173717 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.294264 4832 generic.go:334] "Generic (PLEG): container finished" podID="4e67d890-3131-454a-bd9c-89b8212c8842" containerID="de9c4f90ddbe052cfd4cd05767294661898acd498a8fe454fe1ab8bb9953e880" exitCode=0 Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.294693 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" event={"ID":"4e67d890-3131-454a-bd9c-89b8212c8842","Type":"ContainerDied","Data":"de9c4f90ddbe052cfd4cd05767294661898acd498a8fe454fe1ab8bb9953e880"} Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.297124 4832 generic.go:334] "Generic (PLEG): container finished" podID="c275e157-9561-4355-a81b-a14441a2940f" containerID="10328ed23acbd83a0c646616f06bb6d6c9c4d23c85f9d3a04231964c145c48c0" exitCode=0 Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.297175 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" event={"ID":"c275e157-9561-4355-a81b-a14441a2940f","Type":"ContainerDied","Data":"10328ed23acbd83a0c646616f06bb6d6c9c4d23c85f9d3a04231964c145c48c0"} Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.320161 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.342709 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.346725 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.408260 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5745q\" (UniqueName: \"kubernetes.io/projected/4e67d890-3131-454a-bd9c-89b8212c8842-kube-api-access-5745q\") pod \"4e67d890-3131-454a-bd9c-89b8212c8842\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.408321 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p27w4\" (UniqueName: \"kubernetes.io/projected/c275e157-9561-4355-a81b-a14441a2940f-kube-api-access-p27w4\") pod \"c275e157-9561-4355-a81b-a14441a2940f\" (UID: \"c275e157-9561-4355-a81b-a14441a2940f\") " Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.408381 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e67d890-3131-454a-bd9c-89b8212c8842-config\") pod \"4e67d890-3131-454a-bd9c-89b8212c8842\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.408453 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e67d890-3131-454a-bd9c-89b8212c8842-serving-cert\") pod \"4e67d890-3131-454a-bd9c-89b8212c8842\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.408479 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e67d890-3131-454a-bd9c-89b8212c8842-proxy-ca-bundles\") pod \"4e67d890-3131-454a-bd9c-89b8212c8842\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.408528 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c275e157-9561-4355-a81b-a14441a2940f-client-ca\") pod \"c275e157-9561-4355-a81b-a14441a2940f\" (UID: \"c275e157-9561-4355-a81b-a14441a2940f\") " Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.408550 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c275e157-9561-4355-a81b-a14441a2940f-serving-cert\") pod \"c275e157-9561-4355-a81b-a14441a2940f\" (UID: \"c275e157-9561-4355-a81b-a14441a2940f\") " Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.408573 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c275e157-9561-4355-a81b-a14441a2940f-config\") pod \"c275e157-9561-4355-a81b-a14441a2940f\" (UID: \"c275e157-9561-4355-a81b-a14441a2940f\") " Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.408652 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e67d890-3131-454a-bd9c-89b8212c8842-client-ca\") pod \"4e67d890-3131-454a-bd9c-89b8212c8842\" (UID: \"4e67d890-3131-454a-bd9c-89b8212c8842\") " Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.409255 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c275e157-9561-4355-a81b-a14441a2940f-client-ca" (OuterVolumeSpecName: "client-ca") pod "c275e157-9561-4355-a81b-a14441a2940f" (UID: "c275e157-9561-4355-a81b-a14441a2940f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.409354 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c275e157-9561-4355-a81b-a14441a2940f-config" (OuterVolumeSpecName: "config") pod "c275e157-9561-4355-a81b-a14441a2940f" (UID: "c275e157-9561-4355-a81b-a14441a2940f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.409888 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e67d890-3131-454a-bd9c-89b8212c8842-config" (OuterVolumeSpecName: "config") pod "4e67d890-3131-454a-bd9c-89b8212c8842" (UID: "4e67d890-3131-454a-bd9c-89b8212c8842"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.411219 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e67d890-3131-454a-bd9c-89b8212c8842-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4e67d890-3131-454a-bd9c-89b8212c8842" (UID: "4e67d890-3131-454a-bd9c-89b8212c8842"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.411641 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e67d890-3131-454a-bd9c-89b8212c8842-client-ca" (OuterVolumeSpecName: "client-ca") pod "4e67d890-3131-454a-bd9c-89b8212c8842" (UID: "4e67d890-3131-454a-bd9c-89b8212c8842"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.413749 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e67d890-3131-454a-bd9c-89b8212c8842-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4e67d890-3131-454a-bd9c-89b8212c8842" (UID: "4e67d890-3131-454a-bd9c-89b8212c8842"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.414840 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e67d890-3131-454a-bd9c-89b8212c8842-kube-api-access-5745q" (OuterVolumeSpecName: "kube-api-access-5745q") pod "4e67d890-3131-454a-bd9c-89b8212c8842" (UID: "4e67d890-3131-454a-bd9c-89b8212c8842"). InnerVolumeSpecName "kube-api-access-5745q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.414844 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c275e157-9561-4355-a81b-a14441a2940f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c275e157-9561-4355-a81b-a14441a2940f" (UID: "c275e157-9561-4355-a81b-a14441a2940f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.414952 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c275e157-9561-4355-a81b-a14441a2940f-kube-api-access-p27w4" (OuterVolumeSpecName: "kube-api-access-p27w4") pod "c275e157-9561-4355-a81b-a14441a2940f" (UID: "c275e157-9561-4355-a81b-a14441a2940f"). InnerVolumeSpecName "kube-api-access-p27w4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.466244 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.510681 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5745q\" (UniqueName: \"kubernetes.io/projected/4e67d890-3131-454a-bd9c-89b8212c8842-kube-api-access-5745q\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.510738 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p27w4\" (UniqueName: \"kubernetes.io/projected/c275e157-9561-4355-a81b-a14441a2940f-kube-api-access-p27w4\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.510760 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e67d890-3131-454a-bd9c-89b8212c8842-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.510775 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e67d890-3131-454a-bd9c-89b8212c8842-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.510790 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e67d890-3131-454a-bd9c-89b8212c8842-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.510802 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c275e157-9561-4355-a81b-a14441a2940f-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.510814 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c275e157-9561-4355-a81b-a14441a2940f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.510830 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c275e157-9561-4355-a81b-a14441a2940f-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.510853 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e67d890-3131-454a-bd9c-89b8212c8842-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.530594 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.575763 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg"] Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.661218 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.673928 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.692044 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.713397 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.715209 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 
14:53:08.844980 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.938740 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.978515 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg"] Mar 12 14:53:08 crc kubenswrapper[4832]: W0312 14:53:08.983553 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda99bb108_962b_46dc_a64c_0485744704a5.slice/crio-70e5a86eb0e4c044f3206e164f3140e08511ba7db4ef8d22cbf0c9095103e677 WatchSource:0}: Error finding container 70e5a86eb0e4c044f3206e164f3140e08511ba7db4ef8d22cbf0c9095103e677: Status 404 returned error can't find the container with id 70e5a86eb0e4c044f3206e164f3140e08511ba7db4ef8d22cbf0c9095103e677 Mar 12 14:53:08 crc kubenswrapper[4832]: I0312 14:53:08.985302 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.048857 4832 patch_prober.go:28] interesting pod/controller-manager-7cdb5dc6-gjtgw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.048963 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" podUID="4e67d890-3131-454a-bd9c-89b8212c8842" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.151521 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.302838 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" event={"ID":"a99bb108-962b-46dc-a64c-0485744704a5","Type":"ContainerStarted","Data":"124ed1e8f2a8800dfc886070385579c15534cfec00e301a9462eaeb130586cdf"} Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.302890 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" event={"ID":"a99bb108-962b-46dc-a64c-0485744704a5","Type":"ContainerStarted","Data":"70e5a86eb0e4c044f3206e164f3140e08511ba7db4ef8d22cbf0c9095103e677"} Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.303114 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.304438 4832 patch_prober.go:28] interesting pod/oauth-openshift-78b7d86cb4-6f2lg container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.67:6443/healthz\": dial tcp 10.217.0.67:6443: connect: connection refused" start-of-body= Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.304482 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" podUID="a99bb108-962b-46dc-a64c-0485744704a5" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.67:6443/healthz\": dial tcp 10.217.0.67:6443: connect: connection refused" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.305684 4832 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" event={"ID":"c275e157-9561-4355-a81b-a14441a2940f","Type":"ContainerDied","Data":"b2b224f980f9f5dde2650f6247557fac15d141e22f99ced1de999a98f4aa8907"} Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.305718 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.305728 4832 scope.go:117] "RemoveContainer" containerID="10328ed23acbd83a0c646616f06bb6d6c9c4d23c85f9d3a04231964c145c48c0" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.307566 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" event={"ID":"4e67d890-3131-454a-bd9c-89b8212c8842","Type":"ContainerDied","Data":"bee30a2fd3f5d8772941403cad733c7295a0114ca90c0acd4a3793de3a70c1ed"} Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.307657 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.319813 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.320272 4832 scope.go:117] "RemoveContainer" containerID="de9c4f90ddbe052cfd4cd05767294661898acd498a8fe454fe1ab8bb9953e880" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.339464 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" podStartSLOduration=51.339447301 podStartE2EDuration="51.339447301s" podCreationTimestamp="2026-03-12 14:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:53:09.337071968 +0000 UTC m=+347.981086204" watchObservedRunningTime="2026-03-12 14:53:09.339447301 +0000 UTC m=+347.983461537" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.339957 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.352254 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw"] Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.356647 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7cdb5dc6-gjtgw"] Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.362770 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz"] Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.365649 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbf96fbfc-frrqz"] Mar 
12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.397009 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.397675 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.463182 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.481098 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.535296 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.602131 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.691719 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.752388 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.754688 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-774ffc7588-pwgsl"] Mar 12 14:53:09 crc kubenswrapper[4832]: E0312 14:53:09.754951 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.754968 4832 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 12 14:53:09 crc kubenswrapper[4832]: E0312 14:53:09.754984 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c275e157-9561-4355-a81b-a14441a2940f" containerName="route-controller-manager" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.754991 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c275e157-9561-4355-a81b-a14441a2940f" containerName="route-controller-manager" Mar 12 14:53:09 crc kubenswrapper[4832]: E0312 14:53:09.755004 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e67d890-3131-454a-bd9c-89b8212c8842" containerName="controller-manager" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.755010 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e67d890-3131-454a-bd9c-89b8212c8842" containerName="controller-manager" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.755110 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.755123 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e67d890-3131-454a-bd9c-89b8212c8842" containerName="controller-manager" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.755133 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c275e157-9561-4355-a81b-a14441a2940f" containerName="route-controller-manager" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.755477 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.760388 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.762252 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.764158 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s"] Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.765153 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.766064 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.772113 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.773991 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.774125 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.774221 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.774365 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 
12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.774477 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.774661 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.774829 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.775529 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.784977 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.789003 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s"] Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.791851 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-774ffc7588-pwgsl"] Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.826221 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-config\") pod \"controller-manager-774ffc7588-pwgsl\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.826284 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-serving-cert\") pod \"controller-manager-774ffc7588-pwgsl\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.826312 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-proxy-ca-bundles\") pod \"controller-manager-774ffc7588-pwgsl\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.826335 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q74q\" (UniqueName: \"kubernetes.io/projected/f138191e-f007-42f4-ab88-d8146357484b-kube-api-access-8q74q\") pod \"route-controller-manager-759c849469-n7k9s\" (UID: \"f138191e-f007-42f4-ab88-d8146357484b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.826356 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xjxs\" (UniqueName: \"kubernetes.io/projected/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-kube-api-access-9xjxs\") pod \"controller-manager-774ffc7588-pwgsl\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.826626 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f138191e-f007-42f4-ab88-d8146357484b-client-ca\") pod \"route-controller-manager-759c849469-n7k9s\" (UID: \"f138191e-f007-42f4-ab88-d8146357484b\") " 
pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.826677 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f138191e-f007-42f4-ab88-d8146357484b-serving-cert\") pod \"route-controller-manager-759c849469-n7k9s\" (UID: \"f138191e-f007-42f4-ab88-d8146357484b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.826715 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f138191e-f007-42f4-ab88-d8146357484b-config\") pod \"route-controller-manager-759c849469-n7k9s\" (UID: \"f138191e-f007-42f4-ab88-d8146357484b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.826758 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-client-ca\") pod \"controller-manager-774ffc7588-pwgsl\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.928321 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-serving-cert\") pod \"controller-manager-774ffc7588-pwgsl\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.928386 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-proxy-ca-bundles\") pod \"controller-manager-774ffc7588-pwgsl\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.928439 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q74q\" (UniqueName: \"kubernetes.io/projected/f138191e-f007-42f4-ab88-d8146357484b-kube-api-access-8q74q\") pod \"route-controller-manager-759c849469-n7k9s\" (UID: \"f138191e-f007-42f4-ab88-d8146357484b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.928481 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xjxs\" (UniqueName: \"kubernetes.io/projected/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-kube-api-access-9xjxs\") pod \"controller-manager-774ffc7588-pwgsl\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.928575 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f138191e-f007-42f4-ab88-d8146357484b-client-ca\") pod \"route-controller-manager-759c849469-n7k9s\" (UID: \"f138191e-f007-42f4-ab88-d8146357484b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.928601 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f138191e-f007-42f4-ab88-d8146357484b-serving-cert\") pod \"route-controller-manager-759c849469-n7k9s\" (UID: \"f138191e-f007-42f4-ab88-d8146357484b\") " 
pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.928628 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f138191e-f007-42f4-ab88-d8146357484b-config\") pod \"route-controller-manager-759c849469-n7k9s\" (UID: \"f138191e-f007-42f4-ab88-d8146357484b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.928669 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-client-ca\") pod \"controller-manager-774ffc7588-pwgsl\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.928708 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-config\") pod \"controller-manager-774ffc7588-pwgsl\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.929607 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-proxy-ca-bundles\") pod \"controller-manager-774ffc7588-pwgsl\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.929851 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f138191e-f007-42f4-ab88-d8146357484b-client-ca\") pod 
\"route-controller-manager-759c849469-n7k9s\" (UID: \"f138191e-f007-42f4-ab88-d8146357484b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.929861 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-client-ca\") pod \"controller-manager-774ffc7588-pwgsl\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.930079 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f138191e-f007-42f4-ab88-d8146357484b-config\") pod \"route-controller-manager-759c849469-n7k9s\" (UID: \"f138191e-f007-42f4-ab88-d8146357484b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.930681 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-config\") pod \"controller-manager-774ffc7588-pwgsl\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.934695 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-serving-cert\") pod \"controller-manager-774ffc7588-pwgsl\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.944608 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q74q\" (UniqueName: 
\"kubernetes.io/projected/f138191e-f007-42f4-ab88-d8146357484b-kube-api-access-8q74q\") pod \"route-controller-manager-759c849469-n7k9s\" (UID: \"f138191e-f007-42f4-ab88-d8146357484b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.946492 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xjxs\" (UniqueName: \"kubernetes.io/projected/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-kube-api-access-9xjxs\") pod \"controller-manager-774ffc7588-pwgsl\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.947622 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f138191e-f007-42f4-ab88-d8146357484b-serving-cert\") pod \"route-controller-manager-759c849469-n7k9s\" (UID: \"f138191e-f007-42f4-ab88-d8146357484b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" Mar 12 14:53:09 crc kubenswrapper[4832]: I0312 14:53:09.951564 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 12 14:53:10 crc kubenswrapper[4832]: I0312 14:53:10.065297 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 12 14:53:10 crc kubenswrapper[4832]: I0312 14:53:10.069818 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:10 crc kubenswrapper[4832]: I0312 14:53:10.089088 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" Mar 12 14:53:10 crc kubenswrapper[4832]: I0312 14:53:10.171753 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 12 14:53:10 crc kubenswrapper[4832]: I0312 14:53:10.323241 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-78b7d86cb4-6f2lg" Mar 12 14:53:10 crc kubenswrapper[4832]: I0312 14:53:10.359913 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s"] Mar 12 14:53:10 crc kubenswrapper[4832]: I0312 14:53:10.503734 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-774ffc7588-pwgsl"] Mar 12 14:53:10 crc kubenswrapper[4832]: W0312 14:53:10.508640 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc470d0f3_da71_4852_a8bf_e98df1a8cfcf.slice/crio-b8d362e2bfd521edf65db31a947520b8a0edb6ecf85b68c63e0b8436fa6ddce6 WatchSource:0}: Error finding container b8d362e2bfd521edf65db31a947520b8a0edb6ecf85b68c63e0b8436fa6ddce6: Status 404 returned error can't find the container with id b8d362e2bfd521edf65db31a947520b8a0edb6ecf85b68c63e0b8436fa6ddce6 Mar 12 14:53:10 crc kubenswrapper[4832]: I0312 14:53:10.537559 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 12 14:53:10 crc kubenswrapper[4832]: I0312 14:53:10.565825 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 12 14:53:10 crc kubenswrapper[4832]: I0312 14:53:10.625833 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e67d890-3131-454a-bd9c-89b8212c8842" 
path="/var/lib/kubelet/pods/4e67d890-3131-454a-bd9c-89b8212c8842/volumes" Mar 12 14:53:10 crc kubenswrapper[4832]: I0312 14:53:10.626632 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c275e157-9561-4355-a81b-a14441a2940f" path="/var/lib/kubelet/pods/c275e157-9561-4355-a81b-a14441a2940f/volumes" Mar 12 14:53:10 crc kubenswrapper[4832]: I0312 14:53:10.702163 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 12 14:53:10 crc kubenswrapper[4832]: I0312 14:53:10.708697 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 12 14:53:10 crc kubenswrapper[4832]: I0312 14:53:10.759316 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 12 14:53:11 crc kubenswrapper[4832]: I0312 14:53:11.148175 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 12 14:53:11 crc kubenswrapper[4832]: I0312 14:53:11.148817 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 12 14:53:11 crc kubenswrapper[4832]: I0312 14:53:11.325111 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" event={"ID":"f138191e-f007-42f4-ab88-d8146357484b","Type":"ContainerStarted","Data":"f7851adbb35092e4f301cc9a495fec76133f855e0571a2977e7049c31a659dfe"} Mar 12 14:53:11 crc kubenswrapper[4832]: I0312 14:53:11.325168 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" event={"ID":"f138191e-f007-42f4-ab88-d8146357484b","Type":"ContainerStarted","Data":"a46d69ce5c16049bd63bf687f195f2392511fb3f9db23696e8e9c2b76d9322af"} Mar 12 14:53:11 crc kubenswrapper[4832]: 
I0312 14:53:11.325471 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" Mar 12 14:53:11 crc kubenswrapper[4832]: I0312 14:53:11.327420 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" event={"ID":"c470d0f3-da71-4852-a8bf-e98df1a8cfcf","Type":"ContainerStarted","Data":"2d4efccfb149391eca64ad478b9e829631b00205fd3d33664e34a1f7b7a52087"} Mar 12 14:53:11 crc kubenswrapper[4832]: I0312 14:53:11.327442 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" event={"ID":"c470d0f3-da71-4852-a8bf-e98df1a8cfcf","Type":"ContainerStarted","Data":"b8d362e2bfd521edf65db31a947520b8a0edb6ecf85b68c63e0b8436fa6ddce6"} Mar 12 14:53:11 crc kubenswrapper[4832]: I0312 14:53:11.327456 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:11 crc kubenswrapper[4832]: I0312 14:53:11.331394 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" Mar 12 14:53:11 crc kubenswrapper[4832]: I0312 14:53:11.332616 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:11 crc kubenswrapper[4832]: I0312 14:53:11.343243 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" podStartSLOduration=4.343222969 podStartE2EDuration="4.343222969s" podCreationTimestamp="2026-03-12 14:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:53:11.340459306 +0000 UTC 
m=+349.984473532" watchObservedRunningTime="2026-03-12 14:53:11.343222969 +0000 UTC m=+349.987237205" Mar 12 14:53:11 crc kubenswrapper[4832]: I0312 14:53:11.383613 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" podStartSLOduration=4.383594617 podStartE2EDuration="4.383594617s" podCreationTimestamp="2026-03-12 14:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:53:11.379609121 +0000 UTC m=+350.023623367" watchObservedRunningTime="2026-03-12 14:53:11.383594617 +0000 UTC m=+350.027608853" Mar 12 14:53:11 crc kubenswrapper[4832]: I0312 14:53:11.542216 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 12 14:53:11 crc kubenswrapper[4832]: I0312 14:53:11.814264 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.010771 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.222649 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.222737 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.263033 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.263154 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.263210 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.263199 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.263285 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.263326 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.263642 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.263820 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.263869 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.264259 4832 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.264311 4832 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.264325 4832 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.264335 4832 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.271461 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.334089 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.334145 4832 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="fda0945f53e4c2ede981afab1bbae6e34317fd43c6d289071804fd216e41faae" exitCode=137 Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.334212 4832 scope.go:117] "RemoveContainer" containerID="fda0945f53e4c2ede981afab1bbae6e34317fd43c6d289071804fd216e41faae" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.334247 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.352350 4832 scope.go:117] "RemoveContainer" containerID="fda0945f53e4c2ede981afab1bbae6e34317fd43c6d289071804fd216e41faae" Mar 12 14:53:12 crc kubenswrapper[4832]: E0312 14:53:12.352889 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fda0945f53e4c2ede981afab1bbae6e34317fd43c6d289071804fd216e41faae\": container with ID starting with fda0945f53e4c2ede981afab1bbae6e34317fd43c6d289071804fd216e41faae not found: ID does not exist" containerID="fda0945f53e4c2ede981afab1bbae6e34317fd43c6d289071804fd216e41faae" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.353041 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda0945f53e4c2ede981afab1bbae6e34317fd43c6d289071804fd216e41faae"} err="failed to get container status \"fda0945f53e4c2ede981afab1bbae6e34317fd43c6d289071804fd216e41faae\": rpc error: code = NotFound desc = could not find container 
\"fda0945f53e4c2ede981afab1bbae6e34317fd43c6d289071804fd216e41faae\": container with ID starting with fda0945f53e4c2ede981afab1bbae6e34317fd43c6d289071804fd216e41faae not found: ID does not exist" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.366732 4832 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.628436 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.771138 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 12 14:53:12 crc kubenswrapper[4832]: I0312 14:53:12.935889 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 12 14:53:19 crc kubenswrapper[4832]: I0312 14:53:19.082287 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 14:53:25 crc kubenswrapper[4832]: I0312 14:53:25.693862 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-774ffc7588-pwgsl"] Mar 12 14:53:25 crc kubenswrapper[4832]: I0312 14:53:25.695120 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" podUID="c470d0f3-da71-4852-a8bf-e98df1a8cfcf" containerName="controller-manager" containerID="cri-o://2d4efccfb149391eca64ad478b9e829631b00205fd3d33664e34a1f7b7a52087" gracePeriod=30 Mar 12 14:53:25 crc kubenswrapper[4832]: I0312 14:53:25.708651 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s"] Mar 12 14:53:25 crc kubenswrapper[4832]: I0312 14:53:25.709789 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" podUID="f138191e-f007-42f4-ab88-d8146357484b" containerName="route-controller-manager" containerID="cri-o://f7851adbb35092e4f301cc9a495fec76133f855e0571a2977e7049c31a659dfe" gracePeriod=30 Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.197866 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.201916 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.269846 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f138191e-f007-42f4-ab88-d8146357484b-serving-cert\") pod \"f138191e-f007-42f4-ab88-d8146357484b\" (UID: \"f138191e-f007-42f4-ab88-d8146357484b\") " Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.269960 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f138191e-f007-42f4-ab88-d8146357484b-config\") pod \"f138191e-f007-42f4-ab88-d8146357484b\" (UID: \"f138191e-f007-42f4-ab88-d8146357484b\") " Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.269994 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xjxs\" (UniqueName: \"kubernetes.io/projected/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-kube-api-access-9xjxs\") pod \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " Mar 12 
14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.270029 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-serving-cert\") pod \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.270055 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f138191e-f007-42f4-ab88-d8146357484b-client-ca\") pod \"f138191e-f007-42f4-ab88-d8146357484b\" (UID: \"f138191e-f007-42f4-ab88-d8146357484b\") " Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.270076 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-config\") pod \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.270107 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q74q\" (UniqueName: \"kubernetes.io/projected/f138191e-f007-42f4-ab88-d8146357484b-kube-api-access-8q74q\") pod \"f138191e-f007-42f4-ab88-d8146357484b\" (UID: \"f138191e-f007-42f4-ab88-d8146357484b\") " Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.270127 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-proxy-ca-bundles\") pod \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.270156 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-client-ca\") 
pod \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\" (UID: \"c470d0f3-da71-4852-a8bf-e98df1a8cfcf\") " Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.270750 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c470d0f3-da71-4852-a8bf-e98df1a8cfcf" (UID: "c470d0f3-da71-4852-a8bf-e98df1a8cfcf"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.270797 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-config" (OuterVolumeSpecName: "config") pod "c470d0f3-da71-4852-a8bf-e98df1a8cfcf" (UID: "c470d0f3-da71-4852-a8bf-e98df1a8cfcf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.270826 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-client-ca" (OuterVolumeSpecName: "client-ca") pod "c470d0f3-da71-4852-a8bf-e98df1a8cfcf" (UID: "c470d0f3-da71-4852-a8bf-e98df1a8cfcf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.270933 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f138191e-f007-42f4-ab88-d8146357484b-client-ca" (OuterVolumeSpecName: "client-ca") pod "f138191e-f007-42f4-ab88-d8146357484b" (UID: "f138191e-f007-42f4-ab88-d8146357484b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.270977 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f138191e-f007-42f4-ab88-d8146357484b-config" (OuterVolumeSpecName: "config") pod "f138191e-f007-42f4-ab88-d8146357484b" (UID: "f138191e-f007-42f4-ab88-d8146357484b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.271078 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f138191e-f007-42f4-ab88-d8146357484b-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.271096 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.271108 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.271120 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.282542 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f138191e-f007-42f4-ab88-d8146357484b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f138191e-f007-42f4-ab88-d8146357484b" (UID: "f138191e-f007-42f4-ab88-d8146357484b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.282556 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-kube-api-access-9xjxs" (OuterVolumeSpecName: "kube-api-access-9xjxs") pod "c470d0f3-da71-4852-a8bf-e98df1a8cfcf" (UID: "c470d0f3-da71-4852-a8bf-e98df1a8cfcf"). InnerVolumeSpecName "kube-api-access-9xjxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.282546 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c470d0f3-da71-4852-a8bf-e98df1a8cfcf" (UID: "c470d0f3-da71-4852-a8bf-e98df1a8cfcf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.282537 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f138191e-f007-42f4-ab88-d8146357484b-kube-api-access-8q74q" (OuterVolumeSpecName: "kube-api-access-8q74q") pod "f138191e-f007-42f4-ab88-d8146357484b" (UID: "f138191e-f007-42f4-ab88-d8146357484b"). InnerVolumeSpecName "kube-api-access-8q74q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.372490 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q74q\" (UniqueName: \"kubernetes.io/projected/f138191e-f007-42f4-ab88-d8146357484b-kube-api-access-8q74q\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.372552 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f138191e-f007-42f4-ab88-d8146357484b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.372576 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f138191e-f007-42f4-ab88-d8146357484b-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.372596 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xjxs\" (UniqueName: \"kubernetes.io/projected/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-kube-api-access-9xjxs\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.372612 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c470d0f3-da71-4852-a8bf-e98df1a8cfcf-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.409804 4832 generic.go:334] "Generic (PLEG): container finished" podID="f138191e-f007-42f4-ab88-d8146357484b" containerID="f7851adbb35092e4f301cc9a495fec76133f855e0571a2977e7049c31a659dfe" exitCode=0 Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.409867 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" event={"ID":"f138191e-f007-42f4-ab88-d8146357484b","Type":"ContainerDied","Data":"f7851adbb35092e4f301cc9a495fec76133f855e0571a2977e7049c31a659dfe"} 
Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.409893 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" event={"ID":"f138191e-f007-42f4-ab88-d8146357484b","Type":"ContainerDied","Data":"a46d69ce5c16049bd63bf687f195f2392511fb3f9db23696e8e9c2b76d9322af"} Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.409911 4832 scope.go:117] "RemoveContainer" containerID="f7851adbb35092e4f301cc9a495fec76133f855e0571a2977e7049c31a659dfe" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.410394 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.411676 4832 generic.go:334] "Generic (PLEG): container finished" podID="c470d0f3-da71-4852-a8bf-e98df1a8cfcf" containerID="2d4efccfb149391eca64ad478b9e829631b00205fd3d33664e34a1f7b7a52087" exitCode=0 Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.411698 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" event={"ID":"c470d0f3-da71-4852-a8bf-e98df1a8cfcf","Type":"ContainerDied","Data":"2d4efccfb149391eca64ad478b9e829631b00205fd3d33664e34a1f7b7a52087"} Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.411713 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" event={"ID":"c470d0f3-da71-4852-a8bf-e98df1a8cfcf","Type":"ContainerDied","Data":"b8d362e2bfd521edf65db31a947520b8a0edb6ecf85b68c63e0b8436fa6ddce6"} Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.411754 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-774ffc7588-pwgsl" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.434224 4832 scope.go:117] "RemoveContainer" containerID="f7851adbb35092e4f301cc9a495fec76133f855e0571a2977e7049c31a659dfe" Mar 12 14:53:26 crc kubenswrapper[4832]: E0312 14:53:26.434791 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7851adbb35092e4f301cc9a495fec76133f855e0571a2977e7049c31a659dfe\": container with ID starting with f7851adbb35092e4f301cc9a495fec76133f855e0571a2977e7049c31a659dfe not found: ID does not exist" containerID="f7851adbb35092e4f301cc9a495fec76133f855e0571a2977e7049c31a659dfe" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.434846 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7851adbb35092e4f301cc9a495fec76133f855e0571a2977e7049c31a659dfe"} err="failed to get container status \"f7851adbb35092e4f301cc9a495fec76133f855e0571a2977e7049c31a659dfe\": rpc error: code = NotFound desc = could not find container \"f7851adbb35092e4f301cc9a495fec76133f855e0571a2977e7049c31a659dfe\": container with ID starting with f7851adbb35092e4f301cc9a495fec76133f855e0571a2977e7049c31a659dfe not found: ID does not exist" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.434884 4832 scope.go:117] "RemoveContainer" containerID="2d4efccfb149391eca64ad478b9e829631b00205fd3d33664e34a1f7b7a52087" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.449046 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-774ffc7588-pwgsl"] Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.457040 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-774ffc7588-pwgsl"] Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.457684 4832 scope.go:117] "RemoveContainer" 
containerID="2d4efccfb149391eca64ad478b9e829631b00205fd3d33664e34a1f7b7a52087" Mar 12 14:53:26 crc kubenswrapper[4832]: E0312 14:53:26.457983 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d4efccfb149391eca64ad478b9e829631b00205fd3d33664e34a1f7b7a52087\": container with ID starting with 2d4efccfb149391eca64ad478b9e829631b00205fd3d33664e34a1f7b7a52087 not found: ID does not exist" containerID="2d4efccfb149391eca64ad478b9e829631b00205fd3d33664e34a1f7b7a52087" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.458009 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4efccfb149391eca64ad478b9e829631b00205fd3d33664e34a1f7b7a52087"} err="failed to get container status \"2d4efccfb149391eca64ad478b9e829631b00205fd3d33664e34a1f7b7a52087\": rpc error: code = NotFound desc = could not find container \"2d4efccfb149391eca64ad478b9e829631b00205fd3d33664e34a1f7b7a52087\": container with ID starting with 2d4efccfb149391eca64ad478b9e829631b00205fd3d33664e34a1f7b7a52087 not found: ID does not exist" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.462935 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s"] Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.467135 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759c849469-n7k9s"] Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.628029 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c470d0f3-da71-4852-a8bf-e98df1a8cfcf" path="/var/lib/kubelet/pods/c470d0f3-da71-4852-a8bf-e98df1a8cfcf/volumes" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.628756 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f138191e-f007-42f4-ab88-d8146357484b" 
path="/var/lib/kubelet/pods/f138191e-f007-42f4-ab88-d8146357484b/volumes" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.768338 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c"] Mar 12 14:53:26 crc kubenswrapper[4832]: E0312 14:53:26.768903 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f138191e-f007-42f4-ab88-d8146357484b" containerName="route-controller-manager" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.768926 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f138191e-f007-42f4-ab88-d8146357484b" containerName="route-controller-manager" Mar 12 14:53:26 crc kubenswrapper[4832]: E0312 14:53:26.768943 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c470d0f3-da71-4852-a8bf-e98df1a8cfcf" containerName="controller-manager" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.768954 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c470d0f3-da71-4852-a8bf-e98df1a8cfcf" containerName="controller-manager" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.769272 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c470d0f3-da71-4852-a8bf-e98df1a8cfcf" containerName="controller-manager" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.769364 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f138191e-f007-42f4-ab88-d8146357484b" containerName="route-controller-manager" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.770445 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.772350 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6bb7c5649b-96w6c"] Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.772951 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.773814 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.774081 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.813256 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.813793 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.813830 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.813986 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.814084 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.814439 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.814745 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.814893 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.815019 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.815141 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.818740 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6bb7c5649b-96w6c"] Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.820884 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.821492 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c"] Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.890342 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4lkv\" (UniqueName: \"kubernetes.io/projected/69895f63-1626-4f7d-b128-ccb3dae950c6-kube-api-access-p4lkv\") pod \"controller-manager-6bb7c5649b-96w6c\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") " pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.890382 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9hv8l\" (UniqueName: \"kubernetes.io/projected/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-kube-api-access-9hv8l\") pod \"route-controller-manager-76d65966c5-9g98c\" (UID: \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\") " pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.890431 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69895f63-1626-4f7d-b128-ccb3dae950c6-config\") pod \"controller-manager-6bb7c5649b-96w6c\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") " pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.890524 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-config\") pod \"route-controller-manager-76d65966c5-9g98c\" (UID: \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\") " pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.890545 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-client-ca\") pod \"route-controller-manager-76d65966c5-9g98c\" (UID: \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\") " pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.890568 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69895f63-1626-4f7d-b128-ccb3dae950c6-proxy-ca-bundles\") pod \"controller-manager-6bb7c5649b-96w6c\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") 
" pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.890614 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69895f63-1626-4f7d-b128-ccb3dae950c6-client-ca\") pod \"controller-manager-6bb7c5649b-96w6c\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") " pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.890642 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69895f63-1626-4f7d-b128-ccb3dae950c6-serving-cert\") pod \"controller-manager-6bb7c5649b-96w6c\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") " pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.890710 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-serving-cert\") pod \"route-controller-manager-76d65966c5-9g98c\" (UID: \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\") " pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.991862 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-serving-cert\") pod \"route-controller-manager-76d65966c5-9g98c\" (UID: \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\") " pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.991917 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4lkv\" 
(UniqueName: \"kubernetes.io/projected/69895f63-1626-4f7d-b128-ccb3dae950c6-kube-api-access-p4lkv\") pod \"controller-manager-6bb7c5649b-96w6c\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") " pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.991946 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hv8l\" (UniqueName: \"kubernetes.io/projected/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-kube-api-access-9hv8l\") pod \"route-controller-manager-76d65966c5-9g98c\" (UID: \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\") " pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.991984 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69895f63-1626-4f7d-b128-ccb3dae950c6-config\") pod \"controller-manager-6bb7c5649b-96w6c\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") " pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.992016 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-config\") pod \"route-controller-manager-76d65966c5-9g98c\" (UID: \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\") " pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.992040 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-client-ca\") pod \"route-controller-manager-76d65966c5-9g98c\" (UID: \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\") " pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" Mar 12 14:53:26 crc 
kubenswrapper[4832]: I0312 14:53:26.992062 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69895f63-1626-4f7d-b128-ccb3dae950c6-proxy-ca-bundles\") pod \"controller-manager-6bb7c5649b-96w6c\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") " pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.992108 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69895f63-1626-4f7d-b128-ccb3dae950c6-client-ca\") pod \"controller-manager-6bb7c5649b-96w6c\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") " pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.992134 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69895f63-1626-4f7d-b128-ccb3dae950c6-serving-cert\") pod \"controller-manager-6bb7c5649b-96w6c\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") " pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.993621 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69895f63-1626-4f7d-b128-ccb3dae950c6-proxy-ca-bundles\") pod \"controller-manager-6bb7c5649b-96w6c\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") " pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.993658 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-client-ca\") pod \"route-controller-manager-76d65966c5-9g98c\" (UID: \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\") " 
pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.993786 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69895f63-1626-4f7d-b128-ccb3dae950c6-config\") pod \"controller-manager-6bb7c5649b-96w6c\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") " pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.993825 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-config\") pod \"route-controller-manager-76d65966c5-9g98c\" (UID: \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\") " pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.994421 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69895f63-1626-4f7d-b128-ccb3dae950c6-client-ca\") pod \"controller-manager-6bb7c5649b-96w6c\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") " pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.997438 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69895f63-1626-4f7d-b128-ccb3dae950c6-serving-cert\") pod \"controller-manager-6bb7c5649b-96w6c\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") " pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:53:26 crc kubenswrapper[4832]: I0312 14:53:26.998881 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-serving-cert\") pod \"route-controller-manager-76d65966c5-9g98c\" 
(UID: \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\") " pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" Mar 12 14:53:27 crc kubenswrapper[4832]: I0312 14:53:27.013384 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hv8l\" (UniqueName: \"kubernetes.io/projected/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-kube-api-access-9hv8l\") pod \"route-controller-manager-76d65966c5-9g98c\" (UID: \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\") " pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" Mar 12 14:53:27 crc kubenswrapper[4832]: I0312 14:53:27.014270 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4lkv\" (UniqueName: \"kubernetes.io/projected/69895f63-1626-4f7d-b128-ccb3dae950c6-kube-api-access-p4lkv\") pod \"controller-manager-6bb7c5649b-96w6c\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") " pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:53:27 crc kubenswrapper[4832]: I0312 14:53:27.137776 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" Mar 12 14:53:27 crc kubenswrapper[4832]: I0312 14:53:27.167265 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:53:27 crc kubenswrapper[4832]: I0312 14:53:27.550847 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c"] Mar 12 14:53:27 crc kubenswrapper[4832]: I0312 14:53:27.588975 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6bb7c5649b-96w6c"] Mar 12 14:53:27 crc kubenswrapper[4832]: W0312 14:53:27.594314 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69895f63_1626_4f7d_b128_ccb3dae950c6.slice/crio-4a87f189a68ca761843939a2c5b12578c6826f62a0b207ae01fcee1985d230ff WatchSource:0}: Error finding container 4a87f189a68ca761843939a2c5b12578c6826f62a0b207ae01fcee1985d230ff: Status 404 returned error can't find the container with id 4a87f189a68ca761843939a2c5b12578c6826f62a0b207ae01fcee1985d230ff Mar 12 14:53:28 crc kubenswrapper[4832]: I0312 14:53:28.425909 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" event={"ID":"69895f63-1626-4f7d-b128-ccb3dae950c6","Type":"ContainerStarted","Data":"e5ea9ca15c1571d11932ae4d5949473b2b0230b468e324aafc8f8db421e21937"} Mar 12 14:53:28 crc kubenswrapper[4832]: I0312 14:53:28.425981 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" event={"ID":"69895f63-1626-4f7d-b128-ccb3dae950c6","Type":"ContainerStarted","Data":"4a87f189a68ca761843939a2c5b12578c6826f62a0b207ae01fcee1985d230ff"} Mar 12 14:53:28 crc kubenswrapper[4832]: I0312 14:53:28.426004 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:53:28 crc kubenswrapper[4832]: I0312 14:53:28.428347 4832 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" event={"ID":"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886","Type":"ContainerStarted","Data":"0d673943059a381d0cf244a5a22457d9222c3dc08c9b40b8dc401de31a19e263"} Mar 12 14:53:28 crc kubenswrapper[4832]: I0312 14:53:28.428410 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" event={"ID":"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886","Type":"ContainerStarted","Data":"67d8b53de0fbbaf6439b90ac640d63a86ac6f8875f3474a375985e6c3a38f06a"} Mar 12 14:53:28 crc kubenswrapper[4832]: I0312 14:53:28.428755 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" Mar 12 14:53:28 crc kubenswrapper[4832]: I0312 14:53:28.431302 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:53:28 crc kubenswrapper[4832]: I0312 14:53:28.433628 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" Mar 12 14:53:28 crc kubenswrapper[4832]: I0312 14:53:28.452401 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" podStartSLOduration=3.4523743 podStartE2EDuration="3.4523743s" podCreationTimestamp="2026-03-12 14:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:53:28.450164681 +0000 UTC m=+367.094178907" watchObservedRunningTime="2026-03-12 14:53:28.4523743 +0000 UTC m=+367.096388536" Mar 12 14:53:28 crc kubenswrapper[4832]: I0312 14:53:28.469299 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" podStartSLOduration=3.469281237 podStartE2EDuration="3.469281237s" podCreationTimestamp="2026-03-12 14:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:53:28.466110703 +0000 UTC m=+367.110124969" watchObservedRunningTime="2026-03-12 14:53:28.469281237 +0000 UTC m=+367.113295463" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.307639 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xbj67"] Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.309011 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.324489 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xbj67"] Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.376219 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.376280 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbqxq\" (UniqueName: \"kubernetes.io/projected/d555f4ed-996e-41bd-8235-fe97dd9dda46-kube-api-access-vbqxq\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.376301 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d555f4ed-996e-41bd-8235-fe97dd9dda46-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.376354 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d555f4ed-996e-41bd-8235-fe97dd9dda46-bound-sa-token\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.376379 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d555f4ed-996e-41bd-8235-fe97dd9dda46-trusted-ca\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.376396 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d555f4ed-996e-41bd-8235-fe97dd9dda46-registry-certificates\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.376473 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d555f4ed-996e-41bd-8235-fe97dd9dda46-registry-tls\") pod \"image-registry-66df7c8f76-xbj67\" (UID: 
\"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.376522 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d555f4ed-996e-41bd-8235-fe97dd9dda46-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.405846 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.478353 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d555f4ed-996e-41bd-8235-fe97dd9dda46-trusted-ca\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.478423 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d555f4ed-996e-41bd-8235-fe97dd9dda46-registry-certificates\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.478498 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/d555f4ed-996e-41bd-8235-fe97dd9dda46-registry-tls\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.478550 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d555f4ed-996e-41bd-8235-fe97dd9dda46-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.478631 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbqxq\" (UniqueName: \"kubernetes.io/projected/d555f4ed-996e-41bd-8235-fe97dd9dda46-kube-api-access-vbqxq\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.478670 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d555f4ed-996e-41bd-8235-fe97dd9dda46-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.478723 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d555f4ed-996e-41bd-8235-fe97dd9dda46-bound-sa-token\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.479433 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d555f4ed-996e-41bd-8235-fe97dd9dda46-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.479645 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d555f4ed-996e-41bd-8235-fe97dd9dda46-trusted-ca\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.480740 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d555f4ed-996e-41bd-8235-fe97dd9dda46-registry-certificates\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.486038 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d555f4ed-996e-41bd-8235-fe97dd9dda46-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.486058 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d555f4ed-996e-41bd-8235-fe97dd9dda46-registry-tls\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc 
kubenswrapper[4832]: I0312 14:53:42.494597 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbqxq\" (UniqueName: \"kubernetes.io/projected/d555f4ed-996e-41bd-8235-fe97dd9dda46-kube-api-access-vbqxq\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.496698 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d555f4ed-996e-41bd-8235-fe97dd9dda46-bound-sa-token\") pod \"image-registry-66df7c8f76-xbj67\" (UID: \"d555f4ed-996e-41bd-8235-fe97dd9dda46\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:42 crc kubenswrapper[4832]: I0312 14:53:42.629771 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:43 crc kubenswrapper[4832]: I0312 14:53:43.066404 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xbj67"] Mar 12 14:53:43 crc kubenswrapper[4832]: I0312 14:53:43.517198 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" event={"ID":"d555f4ed-996e-41bd-8235-fe97dd9dda46","Type":"ContainerStarted","Data":"794e0c9fc1ecdbf8fe511893b60b140ca4b77d9c37608a7b4d09005401bbf3a5"} Mar 12 14:53:43 crc kubenswrapper[4832]: I0312 14:53:43.517588 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:53:43 crc kubenswrapper[4832]: I0312 14:53:43.517606 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" 
event={"ID":"d555f4ed-996e-41bd-8235-fe97dd9dda46","Type":"ContainerStarted","Data":"13258ce6050a94b8026105f0904be5b675a24d5b11a6f7043d03bf62f27c5c4a"} Mar 12 14:53:43 crc kubenswrapper[4832]: I0312 14:53:43.538921 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" podStartSLOduration=1.538904736 podStartE2EDuration="1.538904736s" podCreationTimestamp="2026-03-12 14:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:53:43.535832495 +0000 UTC m=+382.179846721" watchObservedRunningTime="2026-03-12 14:53:43.538904736 +0000 UTC m=+382.182918962" Mar 12 14:54:00 crc kubenswrapper[4832]: I0312 14:54:00.162443 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555454-d5dp7"] Mar 12 14:54:00 crc kubenswrapper[4832]: I0312 14:54:00.163714 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555454-d5dp7" Mar 12 14:54:00 crc kubenswrapper[4832]: I0312 14:54:00.165485 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 14:54:00 crc kubenswrapper[4832]: I0312 14:54:00.166166 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:54:00 crc kubenswrapper[4832]: I0312 14:54:00.167252 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:54:00 crc kubenswrapper[4832]: I0312 14:54:00.171683 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555454-d5dp7"] Mar 12 14:54:00 crc kubenswrapper[4832]: I0312 14:54:00.321933 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8knc\" (UniqueName: \"kubernetes.io/projected/ae915dcc-7514-4c61-b873-be6de981b06e-kube-api-access-r8knc\") pod \"auto-csr-approver-29555454-d5dp7\" (UID: \"ae915dcc-7514-4c61-b873-be6de981b06e\") " pod="openshift-infra/auto-csr-approver-29555454-d5dp7" Mar 12 14:54:00 crc kubenswrapper[4832]: I0312 14:54:00.423451 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8knc\" (UniqueName: \"kubernetes.io/projected/ae915dcc-7514-4c61-b873-be6de981b06e-kube-api-access-r8knc\") pod \"auto-csr-approver-29555454-d5dp7\" (UID: \"ae915dcc-7514-4c61-b873-be6de981b06e\") " pod="openshift-infra/auto-csr-approver-29555454-d5dp7" Mar 12 14:54:00 crc kubenswrapper[4832]: I0312 14:54:00.442194 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8knc\" (UniqueName: \"kubernetes.io/projected/ae915dcc-7514-4c61-b873-be6de981b06e-kube-api-access-r8knc\") pod \"auto-csr-approver-29555454-d5dp7\" (UID: \"ae915dcc-7514-4c61-b873-be6de981b06e\") " 
pod="openshift-infra/auto-csr-approver-29555454-d5dp7" Mar 12 14:54:00 crc kubenswrapper[4832]: I0312 14:54:00.518665 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555454-d5dp7" Mar 12 14:54:00 crc kubenswrapper[4832]: I0312 14:54:00.994464 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555454-d5dp7"] Mar 12 14:54:00 crc kubenswrapper[4832]: W0312 14:54:00.998059 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae915dcc_7514_4c61_b873_be6de981b06e.slice/crio-d7e5f896dac5393ea66dde5b4201f57e9f91e94c6e3421f5c672bbe48835328c WatchSource:0}: Error finding container d7e5f896dac5393ea66dde5b4201f57e9f91e94c6e3421f5c672bbe48835328c: Status 404 returned error can't find the container with id d7e5f896dac5393ea66dde5b4201f57e9f91e94c6e3421f5c672bbe48835328c Mar 12 14:54:01 crc kubenswrapper[4832]: I0312 14:54:01.641728 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555454-d5dp7" event={"ID":"ae915dcc-7514-4c61-b873-be6de981b06e","Type":"ContainerStarted","Data":"d7e5f896dac5393ea66dde5b4201f57e9f91e94c6e3421f5c672bbe48835328c"} Mar 12 14:54:02 crc kubenswrapper[4832]: I0312 14:54:02.635416 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-xbj67" Mar 12 14:54:02 crc kubenswrapper[4832]: I0312 14:54:02.648626 4832 generic.go:334] "Generic (PLEG): container finished" podID="ae915dcc-7514-4c61-b873-be6de981b06e" containerID="2907439dd9e5ace54963327600bfbddd9b6de6e6937b1af187041060df6dfbe4" exitCode=0 Mar 12 14:54:02 crc kubenswrapper[4832]: I0312 14:54:02.648663 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555454-d5dp7" 
event={"ID":"ae915dcc-7514-4c61-b873-be6de981b06e","Type":"ContainerDied","Data":"2907439dd9e5ace54963327600bfbddd9b6de6e6937b1af187041060df6dfbe4"} Mar 12 14:54:02 crc kubenswrapper[4832]: I0312 14:54:02.696605 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4tg76"] Mar 12 14:54:03 crc kubenswrapper[4832]: I0312 14:54:03.980257 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555454-d5dp7" Mar 12 14:54:04 crc kubenswrapper[4832]: I0312 14:54:04.103490 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8knc\" (UniqueName: \"kubernetes.io/projected/ae915dcc-7514-4c61-b873-be6de981b06e-kube-api-access-r8knc\") pod \"ae915dcc-7514-4c61-b873-be6de981b06e\" (UID: \"ae915dcc-7514-4c61-b873-be6de981b06e\") " Mar 12 14:54:04 crc kubenswrapper[4832]: I0312 14:54:04.110716 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae915dcc-7514-4c61-b873-be6de981b06e-kube-api-access-r8knc" (OuterVolumeSpecName: "kube-api-access-r8knc") pod "ae915dcc-7514-4c61-b873-be6de981b06e" (UID: "ae915dcc-7514-4c61-b873-be6de981b06e"). InnerVolumeSpecName "kube-api-access-r8knc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:54:04 crc kubenswrapper[4832]: I0312 14:54:04.205619 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8knc\" (UniqueName: \"kubernetes.io/projected/ae915dcc-7514-4c61-b873-be6de981b06e-kube-api-access-r8knc\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:04 crc kubenswrapper[4832]: I0312 14:54:04.660198 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555454-d5dp7" event={"ID":"ae915dcc-7514-4c61-b873-be6de981b06e","Type":"ContainerDied","Data":"d7e5f896dac5393ea66dde5b4201f57e9f91e94c6e3421f5c672bbe48835328c"} Mar 12 14:54:04 crc kubenswrapper[4832]: I0312 14:54:04.660242 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7e5f896dac5393ea66dde5b4201f57e9f91e94c6e3421f5c672bbe48835328c" Mar 12 14:54:04 crc kubenswrapper[4832]: I0312 14:54:04.660258 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555454-d5dp7" Mar 12 14:54:05 crc kubenswrapper[4832]: I0312 14:54:05.689731 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bb7c5649b-96w6c"] Mar 12 14:54:05 crc kubenswrapper[4832]: I0312 14:54:05.689974 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" podUID="69895f63-1626-4f7d-b128-ccb3dae950c6" containerName="controller-manager" containerID="cri-o://e5ea9ca15c1571d11932ae4d5949473b2b0230b468e324aafc8f8db421e21937" gracePeriod=30 Mar 12 14:54:05 crc kubenswrapper[4832]: I0312 14:54:05.698522 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c"] Mar 12 14:54:05 crc kubenswrapper[4832]: I0312 14:54:05.698730 4832 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" podUID="82682bb7-4d36-4fbd-8bfc-cf3d5eda5886" containerName="route-controller-manager" containerID="cri-o://0d673943059a381d0cf244a5a22457d9222c3dc08c9b40b8dc401de31a19e263" gracePeriod=30 Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.059726 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.148484 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.234926 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hv8l\" (UniqueName: \"kubernetes.io/projected/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-kube-api-access-9hv8l\") pod \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\" (UID: \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\") " Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.234998 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-client-ca\") pod \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\" (UID: \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\") " Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.235023 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-config\") pod \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\" (UID: \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\") " Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.235080 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-serving-cert\") pod \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\" (UID: \"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886\") " Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.236144 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-client-ca" (OuterVolumeSpecName: "client-ca") pod "82682bb7-4d36-4fbd-8bfc-cf3d5eda5886" (UID: "82682bb7-4d36-4fbd-8bfc-cf3d5eda5886"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.236265 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-config" (OuterVolumeSpecName: "config") pod "82682bb7-4d36-4fbd-8bfc-cf3d5eda5886" (UID: "82682bb7-4d36-4fbd-8bfc-cf3d5eda5886"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.239050 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-kube-api-access-9hv8l" (OuterVolumeSpecName: "kube-api-access-9hv8l") pod "82682bb7-4d36-4fbd-8bfc-cf3d5eda5886" (UID: "82682bb7-4d36-4fbd-8bfc-cf3d5eda5886"). InnerVolumeSpecName "kube-api-access-9hv8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.240685 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "82682bb7-4d36-4fbd-8bfc-cf3d5eda5886" (UID: "82682bb7-4d36-4fbd-8bfc-cf3d5eda5886"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.336028 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69895f63-1626-4f7d-b128-ccb3dae950c6-config\") pod \"69895f63-1626-4f7d-b128-ccb3dae950c6\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") " Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.336095 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69895f63-1626-4f7d-b128-ccb3dae950c6-serving-cert\") pod \"69895f63-1626-4f7d-b128-ccb3dae950c6\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") " Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.336136 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4lkv\" (UniqueName: \"kubernetes.io/projected/69895f63-1626-4f7d-b128-ccb3dae950c6-kube-api-access-p4lkv\") pod \"69895f63-1626-4f7d-b128-ccb3dae950c6\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") " Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.336152 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69895f63-1626-4f7d-b128-ccb3dae950c6-proxy-ca-bundles\") pod \"69895f63-1626-4f7d-b128-ccb3dae950c6\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") " Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.336168 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69895f63-1626-4f7d-b128-ccb3dae950c6-client-ca\") pod \"69895f63-1626-4f7d-b128-ccb3dae950c6\" (UID: \"69895f63-1626-4f7d-b128-ccb3dae950c6\") " Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.336401 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.336422 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hv8l\" (UniqueName: \"kubernetes.io/projected/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-kube-api-access-9hv8l\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.336432 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.336440 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.337355 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69895f63-1626-4f7d-b128-ccb3dae950c6-config" (OuterVolumeSpecName: "config") pod "69895f63-1626-4f7d-b128-ccb3dae950c6" (UID: "69895f63-1626-4f7d-b128-ccb3dae950c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.337751 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69895f63-1626-4f7d-b128-ccb3dae950c6-client-ca" (OuterVolumeSpecName: "client-ca") pod "69895f63-1626-4f7d-b128-ccb3dae950c6" (UID: "69895f63-1626-4f7d-b128-ccb3dae950c6"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.337747 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69895f63-1626-4f7d-b128-ccb3dae950c6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "69895f63-1626-4f7d-b128-ccb3dae950c6" (UID: "69895f63-1626-4f7d-b128-ccb3dae950c6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.341928 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69895f63-1626-4f7d-b128-ccb3dae950c6-kube-api-access-p4lkv" (OuterVolumeSpecName: "kube-api-access-p4lkv") pod "69895f63-1626-4f7d-b128-ccb3dae950c6" (UID: "69895f63-1626-4f7d-b128-ccb3dae950c6"). InnerVolumeSpecName "kube-api-access-p4lkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.346034 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69895f63-1626-4f7d-b128-ccb3dae950c6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "69895f63-1626-4f7d-b128-ccb3dae950c6" (UID: "69895f63-1626-4f7d-b128-ccb3dae950c6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.437107 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69895f63-1626-4f7d-b128-ccb3dae950c6-config\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.437153 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69895f63-1626-4f7d-b128-ccb3dae950c6-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.437186 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4lkv\" (UniqueName: \"kubernetes.io/projected/69895f63-1626-4f7d-b128-ccb3dae950c6-kube-api-access-p4lkv\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.437198 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69895f63-1626-4f7d-b128-ccb3dae950c6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.437206 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69895f63-1626-4f7d-b128-ccb3dae950c6-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.684985 4832 generic.go:334] "Generic (PLEG): container finished" podID="82682bb7-4d36-4fbd-8bfc-cf3d5eda5886" containerID="0d673943059a381d0cf244a5a22457d9222c3dc08c9b40b8dc401de31a19e263" exitCode=0 Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.685042 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.685045 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" event={"ID":"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886","Type":"ContainerDied","Data":"0d673943059a381d0cf244a5a22457d9222c3dc08c9b40b8dc401de31a19e263"} Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.685152 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c" event={"ID":"82682bb7-4d36-4fbd-8bfc-cf3d5eda5886","Type":"ContainerDied","Data":"67d8b53de0fbbaf6439b90ac640d63a86ac6f8875f3474a375985e6c3a38f06a"} Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.685168 4832 scope.go:117] "RemoveContainer" containerID="0d673943059a381d0cf244a5a22457d9222c3dc08c9b40b8dc401de31a19e263" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.686863 4832 generic.go:334] "Generic (PLEG): container finished" podID="69895f63-1626-4f7d-b128-ccb3dae950c6" containerID="e5ea9ca15c1571d11932ae4d5949473b2b0230b468e324aafc8f8db421e21937" exitCode=0 Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.686890 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" event={"ID":"69895f63-1626-4f7d-b128-ccb3dae950c6","Type":"ContainerDied","Data":"e5ea9ca15c1571d11932ae4d5949473b2b0230b468e324aafc8f8db421e21937"} Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.686926 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" event={"ID":"69895f63-1626-4f7d-b128-ccb3dae950c6","Type":"ContainerDied","Data":"4a87f189a68ca761843939a2c5b12578c6826f62a0b207ae01fcee1985d230ff"} Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.686900 4832 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bb7c5649b-96w6c" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.715344 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bb7c5649b-96w6c"] Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.717015 4832 scope.go:117] "RemoveContainer" containerID="0d673943059a381d0cf244a5a22457d9222c3dc08c9b40b8dc401de31a19e263" Mar 12 14:54:06 crc kubenswrapper[4832]: E0312 14:54:06.717610 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d673943059a381d0cf244a5a22457d9222c3dc08c9b40b8dc401de31a19e263\": container with ID starting with 0d673943059a381d0cf244a5a22457d9222c3dc08c9b40b8dc401de31a19e263 not found: ID does not exist" containerID="0d673943059a381d0cf244a5a22457d9222c3dc08c9b40b8dc401de31a19e263" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.717662 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d673943059a381d0cf244a5a22457d9222c3dc08c9b40b8dc401de31a19e263"} err="failed to get container status \"0d673943059a381d0cf244a5a22457d9222c3dc08c9b40b8dc401de31a19e263\": rpc error: code = NotFound desc = could not find container \"0d673943059a381d0cf244a5a22457d9222c3dc08c9b40b8dc401de31a19e263\": container with ID starting with 0d673943059a381d0cf244a5a22457d9222c3dc08c9b40b8dc401de31a19e263 not found: ID does not exist" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.717701 4832 scope.go:117] "RemoveContainer" containerID="e5ea9ca15c1571d11932ae4d5949473b2b0230b468e324aafc8f8db421e21937" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.723104 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6bb7c5649b-96w6c"] Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.726892 4832 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c"] Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.730759 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d65966c5-9g98c"] Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.734405 4832 scope.go:117] "RemoveContainer" containerID="e5ea9ca15c1571d11932ae4d5949473b2b0230b468e324aafc8f8db421e21937" Mar 12 14:54:06 crc kubenswrapper[4832]: E0312 14:54:06.734828 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5ea9ca15c1571d11932ae4d5949473b2b0230b468e324aafc8f8db421e21937\": container with ID starting with e5ea9ca15c1571d11932ae4d5949473b2b0230b468e324aafc8f8db421e21937 not found: ID does not exist" containerID="e5ea9ca15c1571d11932ae4d5949473b2b0230b468e324aafc8f8db421e21937" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.734873 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5ea9ca15c1571d11932ae4d5949473b2b0230b468e324aafc8f8db421e21937"} err="failed to get container status \"e5ea9ca15c1571d11932ae4d5949473b2b0230b468e324aafc8f8db421e21937\": rpc error: code = NotFound desc = could not find container \"e5ea9ca15c1571d11932ae4d5949473b2b0230b468e324aafc8f8db421e21937\": container with ID starting with e5ea9ca15c1571d11932ae4d5949473b2b0230b468e324aafc8f8db421e21937 not found: ID does not exist" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.789310 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-774ffc7588-zk74z"] Mar 12 14:54:06 crc kubenswrapper[4832]: E0312 14:54:06.789599 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae915dcc-7514-4c61-b873-be6de981b06e" containerName="oc" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 
14:54:06.789620 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae915dcc-7514-4c61-b873-be6de981b06e" containerName="oc" Mar 12 14:54:06 crc kubenswrapper[4832]: E0312 14:54:06.789638 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69895f63-1626-4f7d-b128-ccb3dae950c6" containerName="controller-manager" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.789645 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="69895f63-1626-4f7d-b128-ccb3dae950c6" containerName="controller-manager" Mar 12 14:54:06 crc kubenswrapper[4832]: E0312 14:54:06.789655 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82682bb7-4d36-4fbd-8bfc-cf3d5eda5886" containerName="route-controller-manager" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.789661 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="82682bb7-4d36-4fbd-8bfc-cf3d5eda5886" containerName="route-controller-manager" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.789741 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="82682bb7-4d36-4fbd-8bfc-cf3d5eda5886" containerName="route-controller-manager" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.789753 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae915dcc-7514-4c61-b873-be6de981b06e" containerName="oc" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.789762 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="69895f63-1626-4f7d-b128-ccb3dae950c6" containerName="controller-manager" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.790151 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.792473 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.792581 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.792473 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.793957 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.794262 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.794279 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.803449 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-774ffc7588-zk74z"] Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.804290 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.942730 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btbfn\" (UniqueName: \"kubernetes.io/projected/9e477dad-0eb3-44fc-8d79-42f7599392de-kube-api-access-btbfn\") pod \"controller-manager-774ffc7588-zk74z\" (UID: \"9e477dad-0eb3-44fc-8d79-42f7599392de\") " 
pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.942791 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e477dad-0eb3-44fc-8d79-42f7599392de-client-ca\") pod \"controller-manager-774ffc7588-zk74z\" (UID: \"9e477dad-0eb3-44fc-8d79-42f7599392de\") " pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.942816 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e477dad-0eb3-44fc-8d79-42f7599392de-config\") pod \"controller-manager-774ffc7588-zk74z\" (UID: \"9e477dad-0eb3-44fc-8d79-42f7599392de\") " pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.942843 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e477dad-0eb3-44fc-8d79-42f7599392de-serving-cert\") pod \"controller-manager-774ffc7588-zk74z\" (UID: \"9e477dad-0eb3-44fc-8d79-42f7599392de\") " pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" Mar 12 14:54:06 crc kubenswrapper[4832]: I0312 14:54:06.942858 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e477dad-0eb3-44fc-8d79-42f7599392de-proxy-ca-bundles\") pod \"controller-manager-774ffc7588-zk74z\" (UID: \"9e477dad-0eb3-44fc-8d79-42f7599392de\") " pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.044047 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btbfn\" (UniqueName: 
\"kubernetes.io/projected/9e477dad-0eb3-44fc-8d79-42f7599392de-kube-api-access-btbfn\") pod \"controller-manager-774ffc7588-zk74z\" (UID: \"9e477dad-0eb3-44fc-8d79-42f7599392de\") " pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.044117 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e477dad-0eb3-44fc-8d79-42f7599392de-client-ca\") pod \"controller-manager-774ffc7588-zk74z\" (UID: \"9e477dad-0eb3-44fc-8d79-42f7599392de\") " pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.044144 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e477dad-0eb3-44fc-8d79-42f7599392de-config\") pod \"controller-manager-774ffc7588-zk74z\" (UID: \"9e477dad-0eb3-44fc-8d79-42f7599392de\") " pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.044168 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e477dad-0eb3-44fc-8d79-42f7599392de-serving-cert\") pod \"controller-manager-774ffc7588-zk74z\" (UID: \"9e477dad-0eb3-44fc-8d79-42f7599392de\") " pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.044189 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e477dad-0eb3-44fc-8d79-42f7599392de-proxy-ca-bundles\") pod \"controller-manager-774ffc7588-zk74z\" (UID: \"9e477dad-0eb3-44fc-8d79-42f7599392de\") " pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.046157 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e477dad-0eb3-44fc-8d79-42f7599392de-proxy-ca-bundles\") pod \"controller-manager-774ffc7588-zk74z\" (UID: \"9e477dad-0eb3-44fc-8d79-42f7599392de\") " pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.046236 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e477dad-0eb3-44fc-8d79-42f7599392de-client-ca\") pod \"controller-manager-774ffc7588-zk74z\" (UID: \"9e477dad-0eb3-44fc-8d79-42f7599392de\") " pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.046650 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e477dad-0eb3-44fc-8d79-42f7599392de-config\") pod \"controller-manager-774ffc7588-zk74z\" (UID: \"9e477dad-0eb3-44fc-8d79-42f7599392de\") " pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.050773 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e477dad-0eb3-44fc-8d79-42f7599392de-serving-cert\") pod \"controller-manager-774ffc7588-zk74z\" (UID: \"9e477dad-0eb3-44fc-8d79-42f7599392de\") " pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.066349 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btbfn\" (UniqueName: \"kubernetes.io/projected/9e477dad-0eb3-44fc-8d79-42f7599392de-kube-api-access-btbfn\") pod \"controller-manager-774ffc7588-zk74z\" (UID: \"9e477dad-0eb3-44fc-8d79-42f7599392de\") " pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" Mar 12 
14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.142663 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.335706 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-774ffc7588-zk74z"] Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.693743 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" event={"ID":"9e477dad-0eb3-44fc-8d79-42f7599392de","Type":"ContainerStarted","Data":"7cd44d41b7c4f08656680d461bea01d6aa29d4484f9bb63226aacc70463053bc"} Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.694047 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" event={"ID":"9e477dad-0eb3-44fc-8d79-42f7599392de","Type":"ContainerStarted","Data":"aa69e8fb0d59ccf132431c30c4f66170c6d98720ae9c23941df6ec60d59d08fa"} Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.694064 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.698251 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.708959 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-774ffc7588-zk74z" podStartSLOduration=2.708942689 podStartE2EDuration="2.708942689s" podCreationTimestamp="2026-03-12 14:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:54:07.708484426 +0000 UTC m=+406.352498662" 
watchObservedRunningTime="2026-03-12 14:54:07.708942689 +0000 UTC m=+406.352956915" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.793215 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759c849469-b68tr"] Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.794013 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-759c849469-b68tr" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.797752 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.799062 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.799191 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.799320 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.799366 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.799439 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.806696 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759c849469-b68tr"] Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.960766 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/057ab7f8-0be6-40f4-a6c0-978e54a46c4b-serving-cert\") pod \"route-controller-manager-759c849469-b68tr\" (UID: \"057ab7f8-0be6-40f4-a6c0-978e54a46c4b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-b68tr" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.960871 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq7bd\" (UniqueName: \"kubernetes.io/projected/057ab7f8-0be6-40f4-a6c0-978e54a46c4b-kube-api-access-hq7bd\") pod \"route-controller-manager-759c849469-b68tr\" (UID: \"057ab7f8-0be6-40f4-a6c0-978e54a46c4b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-b68tr" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.960914 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/057ab7f8-0be6-40f4-a6c0-978e54a46c4b-client-ca\") pod \"route-controller-manager-759c849469-b68tr\" (UID: \"057ab7f8-0be6-40f4-a6c0-978e54a46c4b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-b68tr" Mar 12 14:54:07 crc kubenswrapper[4832]: I0312 14:54:07.961002 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/057ab7f8-0be6-40f4-a6c0-978e54a46c4b-config\") pod \"route-controller-manager-759c849469-b68tr\" (UID: \"057ab7f8-0be6-40f4-a6c0-978e54a46c4b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-b68tr" Mar 12 14:54:08 crc kubenswrapper[4832]: I0312 14:54:08.062942 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq7bd\" (UniqueName: \"kubernetes.io/projected/057ab7f8-0be6-40f4-a6c0-978e54a46c4b-kube-api-access-hq7bd\") pod \"route-controller-manager-759c849469-b68tr\" (UID: 
\"057ab7f8-0be6-40f4-a6c0-978e54a46c4b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-b68tr" Mar 12 14:54:08 crc kubenswrapper[4832]: I0312 14:54:08.063544 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/057ab7f8-0be6-40f4-a6c0-978e54a46c4b-client-ca\") pod \"route-controller-manager-759c849469-b68tr\" (UID: \"057ab7f8-0be6-40f4-a6c0-978e54a46c4b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-b68tr" Mar 12 14:54:08 crc kubenswrapper[4832]: I0312 14:54:08.063660 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/057ab7f8-0be6-40f4-a6c0-978e54a46c4b-config\") pod \"route-controller-manager-759c849469-b68tr\" (UID: \"057ab7f8-0be6-40f4-a6c0-978e54a46c4b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-b68tr" Mar 12 14:54:08 crc kubenswrapper[4832]: I0312 14:54:08.063700 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/057ab7f8-0be6-40f4-a6c0-978e54a46c4b-serving-cert\") pod \"route-controller-manager-759c849469-b68tr\" (UID: \"057ab7f8-0be6-40f4-a6c0-978e54a46c4b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-b68tr" Mar 12 14:54:08 crc kubenswrapper[4832]: I0312 14:54:08.064817 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/057ab7f8-0be6-40f4-a6c0-978e54a46c4b-client-ca\") pod \"route-controller-manager-759c849469-b68tr\" (UID: \"057ab7f8-0be6-40f4-a6c0-978e54a46c4b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-b68tr" Mar 12 14:54:08 crc kubenswrapper[4832]: I0312 14:54:08.065899 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/057ab7f8-0be6-40f4-a6c0-978e54a46c4b-config\") pod \"route-controller-manager-759c849469-b68tr\" (UID: \"057ab7f8-0be6-40f4-a6c0-978e54a46c4b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-b68tr" Mar 12 14:54:08 crc kubenswrapper[4832]: I0312 14:54:08.078013 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/057ab7f8-0be6-40f4-a6c0-978e54a46c4b-serving-cert\") pod \"route-controller-manager-759c849469-b68tr\" (UID: \"057ab7f8-0be6-40f4-a6c0-978e54a46c4b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-b68tr" Mar 12 14:54:08 crc kubenswrapper[4832]: I0312 14:54:08.083978 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq7bd\" (UniqueName: \"kubernetes.io/projected/057ab7f8-0be6-40f4-a6c0-978e54a46c4b-kube-api-access-hq7bd\") pod \"route-controller-manager-759c849469-b68tr\" (UID: \"057ab7f8-0be6-40f4-a6c0-978e54a46c4b\") " pod="openshift-route-controller-manager/route-controller-manager-759c849469-b68tr" Mar 12 14:54:08 crc kubenswrapper[4832]: I0312 14:54:08.109848 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-759c849469-b68tr" Mar 12 14:54:08 crc kubenswrapper[4832]: I0312 14:54:08.506536 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759c849469-b68tr"] Mar 12 14:54:08 crc kubenswrapper[4832]: W0312 14:54:08.513281 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod057ab7f8_0be6_40f4_a6c0_978e54a46c4b.slice/crio-d5b4f95a7e14b9c7bc8b0c70f8baac4e5d789ee182480a7c889ab6542ae5fa44 WatchSource:0}: Error finding container d5b4f95a7e14b9c7bc8b0c70f8baac4e5d789ee182480a7c889ab6542ae5fa44: Status 404 returned error can't find the container with id d5b4f95a7e14b9c7bc8b0c70f8baac4e5d789ee182480a7c889ab6542ae5fa44 Mar 12 14:54:08 crc kubenswrapper[4832]: I0312 14:54:08.632152 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69895f63-1626-4f7d-b128-ccb3dae950c6" path="/var/lib/kubelet/pods/69895f63-1626-4f7d-b128-ccb3dae950c6/volumes" Mar 12 14:54:08 crc kubenswrapper[4832]: I0312 14:54:08.633389 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82682bb7-4d36-4fbd-8bfc-cf3d5eda5886" path="/var/lib/kubelet/pods/82682bb7-4d36-4fbd-8bfc-cf3d5eda5886/volumes" Mar 12 14:54:08 crc kubenswrapper[4832]: I0312 14:54:08.702811 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-759c849469-b68tr" event={"ID":"057ab7f8-0be6-40f4-a6c0-978e54a46c4b","Type":"ContainerStarted","Data":"f365e4a77185d4050be9c9f583284509dc8402311fc6351376ecaf4d0eb918ea"} Mar 12 14:54:08 crc kubenswrapper[4832]: I0312 14:54:08.702863 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-759c849469-b68tr" 
event={"ID":"057ab7f8-0be6-40f4-a6c0-978e54a46c4b","Type":"ContainerStarted","Data":"d5b4f95a7e14b9c7bc8b0c70f8baac4e5d789ee182480a7c889ab6542ae5fa44"} Mar 12 14:54:08 crc kubenswrapper[4832]: I0312 14:54:08.703201 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-759c849469-b68tr" Mar 12 14:54:08 crc kubenswrapper[4832]: I0312 14:54:08.730127 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-759c849469-b68tr" podStartSLOduration=3.730110496 podStartE2EDuration="3.730110496s" podCreationTimestamp="2026-03-12 14:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:54:08.728905091 +0000 UTC m=+407.372919397" watchObservedRunningTime="2026-03-12 14:54:08.730110496 +0000 UTC m=+407.374124722" Mar 12 14:54:09 crc kubenswrapper[4832]: I0312 14:54:09.171625 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-759c849469-b68tr" Mar 12 14:54:26 crc kubenswrapper[4832]: I0312 14:54:26.314174 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:54:26 crc kubenswrapper[4832]: I0312 14:54:26.315004 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:54:27 crc kubenswrapper[4832]: I0312 
14:54:27.730309 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" podUID="4b37b50d-5624-4bc3-85c2-2d5ae148ba0d" containerName="registry" containerID="cri-o://7784eb298c82c334f20815f8a58c6035e8ef6ed5e0cf1b17d79b8062ce9ef768" gracePeriod=30 Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.175487 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.253622 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-registry-tls\") pod \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.253673 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-ca-trust-extracted\") pod \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.253698 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-trusted-ca\") pod \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.253718 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-registry-certificates\") pod \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " Mar 12 14:54:28 crc 
kubenswrapper[4832]: I0312 14:54:28.253755 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-bound-sa-token\") pod \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.253771 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skhsj\" (UniqueName: \"kubernetes.io/projected/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-kube-api-access-skhsj\") pod \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.253791 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-installation-pull-secrets\") pod \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.254142 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\" (UID: \"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d\") " Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.254820 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.255358 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.260153 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.260272 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.265014 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.265226 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-kube-api-access-skhsj" (OuterVolumeSpecName: "kube-api-access-skhsj") pod "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d"). InnerVolumeSpecName "kube-api-access-skhsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.266284 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.278649 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d" (UID: "4b37b50d-5624-4bc3-85c2-2d5ae148ba0d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.356265 4832 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.356312 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.356325 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skhsj\" (UniqueName: \"kubernetes.io/projected/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-kube-api-access-skhsj\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.356337 4832 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.356349 4832 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.356360 4832 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.356370 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:28 crc 
kubenswrapper[4832]: I0312 14:54:28.822082 4832 generic.go:334] "Generic (PLEG): container finished" podID="4b37b50d-5624-4bc3-85c2-2d5ae148ba0d" containerID="7784eb298c82c334f20815f8a58c6035e8ef6ed5e0cf1b17d79b8062ce9ef768" exitCode=0 Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.822123 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" event={"ID":"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d","Type":"ContainerDied","Data":"7784eb298c82c334f20815f8a58c6035e8ef6ed5e0cf1b17d79b8062ce9ef768"} Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.822142 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.822161 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4tg76" event={"ID":"4b37b50d-5624-4bc3-85c2-2d5ae148ba0d","Type":"ContainerDied","Data":"bdc68df9146d3e683128a1312418893dc21b89872c29b71ba826df2152f2ab94"} Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.822184 4832 scope.go:117] "RemoveContainer" containerID="7784eb298c82c334f20815f8a58c6035e8ef6ed5e0cf1b17d79b8062ce9ef768" Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.840587 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4tg76"] Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.844085 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4tg76"] Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.848104 4832 scope.go:117] "RemoveContainer" containerID="7784eb298c82c334f20815f8a58c6035e8ef6ed5e0cf1b17d79b8062ce9ef768" Mar 12 14:54:28 crc kubenswrapper[4832]: E0312 14:54:28.848799 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"7784eb298c82c334f20815f8a58c6035e8ef6ed5e0cf1b17d79b8062ce9ef768\": container with ID starting with 7784eb298c82c334f20815f8a58c6035e8ef6ed5e0cf1b17d79b8062ce9ef768 not found: ID does not exist" containerID="7784eb298c82c334f20815f8a58c6035e8ef6ed5e0cf1b17d79b8062ce9ef768" Mar 12 14:54:28 crc kubenswrapper[4832]: I0312 14:54:28.848848 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7784eb298c82c334f20815f8a58c6035e8ef6ed5e0cf1b17d79b8062ce9ef768"} err="failed to get container status \"7784eb298c82c334f20815f8a58c6035e8ef6ed5e0cf1b17d79b8062ce9ef768\": rpc error: code = NotFound desc = could not find container \"7784eb298c82c334f20815f8a58c6035e8ef6ed5e0cf1b17d79b8062ce9ef768\": container with ID starting with 7784eb298c82c334f20815f8a58c6035e8ef6ed5e0cf1b17d79b8062ce9ef768 not found: ID does not exist" Mar 12 14:54:30 crc kubenswrapper[4832]: I0312 14:54:30.636082 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b37b50d-5624-4bc3-85c2-2d5ae148ba0d" path="/var/lib/kubelet/pods/4b37b50d-5624-4bc3-85c2-2d5ae148ba0d/volumes" Mar 12 14:54:39 crc kubenswrapper[4832]: I0312 14:54:39.832607 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qbwsc"] Mar 12 14:54:39 crc kubenswrapper[4832]: I0312 14:54:39.833426 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qbwsc" podUID="7fdc1c63-8a73-405f-aede-75834651cccc" containerName="registry-server" containerID="cri-o://9ee9b29a44cc7b2ed6d165449d32ae75117ef1d1cb300d8ea497654e57b8b957" gracePeriod=30 Mar 12 14:54:39 crc kubenswrapper[4832]: I0312 14:54:39.837199 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xql8n"] Mar 12 14:54:39 crc kubenswrapper[4832]: I0312 14:54:39.837530 4832 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-xql8n" podUID="17368088-aec0-4319-8575-045b54487a1f" containerName="registry-server" containerID="cri-o://20b30a02905f2cf95243905236e40315a443ed4e35d7ca40bc5a17a153a52fb7" gracePeriod=30 Mar 12 14:54:39 crc kubenswrapper[4832]: I0312 14:54:39.841591 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ph4hc"] Mar 12 14:54:39 crc kubenswrapper[4832]: I0312 14:54:39.841791 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" podUID="31435028-adc4-4b77-85d3-5d7659cd80f0" containerName="marketplace-operator" containerID="cri-o://fe3982da7b728fa4f445f193ad50d7f3b9e91bb61e73838a0e014d3e3ae44ff8" gracePeriod=30 Mar 12 14:54:39 crc kubenswrapper[4832]: I0312 14:54:39.858264 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnxm6"] Mar 12 14:54:39 crc kubenswrapper[4832]: I0312 14:54:39.858559 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xnxm6" podUID="986f5b8c-a467-455c-9b4c-e53572535143" containerName="registry-server" containerID="cri-o://9b810b3cf14274c3d6bc8412c6291f40bbc22e26a32c72ba85971c3bac819f88" gracePeriod=30 Mar 12 14:54:39 crc kubenswrapper[4832]: I0312 14:54:39.877596 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7pf4z"] Mar 12 14:54:39 crc kubenswrapper[4832]: I0312 14:54:39.877931 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2p9wx"] Mar 12 14:54:39 crc kubenswrapper[4832]: E0312 14:54:39.878233 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b37b50d-5624-4bc3-85c2-2d5ae148ba0d" containerName="registry" Mar 12 14:54:39 crc kubenswrapper[4832]: I0312 14:54:39.878261 4832 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4b37b50d-5624-4bc3-85c2-2d5ae148ba0d" containerName="registry" Mar 12 14:54:39 crc kubenswrapper[4832]: I0312 14:54:39.878379 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b37b50d-5624-4bc3-85c2-2d5ae148ba0d" containerName="registry" Mar 12 14:54:39 crc kubenswrapper[4832]: I0312 14:54:39.878860 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2p9wx" Mar 12 14:54:39 crc kubenswrapper[4832]: I0312 14:54:39.878988 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7pf4z" podUID="945e7b12-b4c7-45e9-9956-6dda3eed3c62" containerName="registry-server" containerID="cri-o://b15d8e0d138920b0a631005242ed3ecd61b4d2eefd94016f80e3acd48314e6ee" gracePeriod=30 Mar 12 14:54:39 crc kubenswrapper[4832]: I0312 14:54:39.882638 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2p9wx"] Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.003037 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fe789bb-6979-470b-96cc-e07a65463ecb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2p9wx\" (UID: \"1fe789bb-6979-470b-96cc-e07a65463ecb\") " pod="openshift-marketplace/marketplace-operator-79b997595-2p9wx" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.003113 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1fe789bb-6979-470b-96cc-e07a65463ecb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2p9wx\" (UID: \"1fe789bb-6979-470b-96cc-e07a65463ecb\") " pod="openshift-marketplace/marketplace-operator-79b997595-2p9wx" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 
14:54:40.003154 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg2sj\" (UniqueName: \"kubernetes.io/projected/1fe789bb-6979-470b-96cc-e07a65463ecb-kube-api-access-lg2sj\") pod \"marketplace-operator-79b997595-2p9wx\" (UID: \"1fe789bb-6979-470b-96cc-e07a65463ecb\") " pod="openshift-marketplace/marketplace-operator-79b997595-2p9wx" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.104308 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg2sj\" (UniqueName: \"kubernetes.io/projected/1fe789bb-6979-470b-96cc-e07a65463ecb-kube-api-access-lg2sj\") pod \"marketplace-operator-79b997595-2p9wx\" (UID: \"1fe789bb-6979-470b-96cc-e07a65463ecb\") " pod="openshift-marketplace/marketplace-operator-79b997595-2p9wx" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.104405 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fe789bb-6979-470b-96cc-e07a65463ecb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2p9wx\" (UID: \"1fe789bb-6979-470b-96cc-e07a65463ecb\") " pod="openshift-marketplace/marketplace-operator-79b997595-2p9wx" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.104439 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1fe789bb-6979-470b-96cc-e07a65463ecb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2p9wx\" (UID: \"1fe789bb-6979-470b-96cc-e07a65463ecb\") " pod="openshift-marketplace/marketplace-operator-79b997595-2p9wx" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.106597 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fe789bb-6979-470b-96cc-e07a65463ecb-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-2p9wx\" (UID: \"1fe789bb-6979-470b-96cc-e07a65463ecb\") " pod="openshift-marketplace/marketplace-operator-79b997595-2p9wx" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.110164 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1fe789bb-6979-470b-96cc-e07a65463ecb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2p9wx\" (UID: \"1fe789bb-6979-470b-96cc-e07a65463ecb\") " pod="openshift-marketplace/marketplace-operator-79b997595-2p9wx" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.121997 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg2sj\" (UniqueName: \"kubernetes.io/projected/1fe789bb-6979-470b-96cc-e07a65463ecb-kube-api-access-lg2sj\") pod \"marketplace-operator-79b997595-2p9wx\" (UID: \"1fe789bb-6979-470b-96cc-e07a65463ecb\") " pod="openshift-marketplace/marketplace-operator-79b997595-2p9wx" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.281710 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2p9wx" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.344318 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qbwsc" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.508730 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fdc1c63-8a73-405f-aede-75834651cccc-utilities\") pod \"7fdc1c63-8a73-405f-aede-75834651cccc\" (UID: \"7fdc1c63-8a73-405f-aede-75834651cccc\") " Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.508882 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fdc1c63-8a73-405f-aede-75834651cccc-catalog-content\") pod \"7fdc1c63-8a73-405f-aede-75834651cccc\" (UID: \"7fdc1c63-8a73-405f-aede-75834651cccc\") " Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.508931 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59fzc\" (UniqueName: \"kubernetes.io/projected/7fdc1c63-8a73-405f-aede-75834651cccc-kube-api-access-59fzc\") pod \"7fdc1c63-8a73-405f-aede-75834651cccc\" (UID: \"7fdc1c63-8a73-405f-aede-75834651cccc\") " Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.510704 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fdc1c63-8a73-405f-aede-75834651cccc-utilities" (OuterVolumeSpecName: "utilities") pod "7fdc1c63-8a73-405f-aede-75834651cccc" (UID: "7fdc1c63-8a73-405f-aede-75834651cccc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.514132 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fdc1c63-8a73-405f-aede-75834651cccc-kube-api-access-59fzc" (OuterVolumeSpecName: "kube-api-access-59fzc") pod "7fdc1c63-8a73-405f-aede-75834651cccc" (UID: "7fdc1c63-8a73-405f-aede-75834651cccc"). InnerVolumeSpecName "kube-api-access-59fzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.556194 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnxm6" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.560545 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.575098 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xql8n" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.576475 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7pf4z" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.611270 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59fzc\" (UniqueName: \"kubernetes.io/projected/7fdc1c63-8a73-405f-aede-75834651cccc-kube-api-access-59fzc\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.611301 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fdc1c63-8a73-405f-aede-75834651cccc-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.618345 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fdc1c63-8a73-405f-aede-75834651cccc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fdc1c63-8a73-405f-aede-75834651cccc" (UID: "7fdc1c63-8a73-405f-aede-75834651cccc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.712009 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxdx7\" (UniqueName: \"kubernetes.io/projected/31435028-adc4-4b77-85d3-5d7659cd80f0-kube-api-access-zxdx7\") pod \"31435028-adc4-4b77-85d3-5d7659cd80f0\" (UID: \"31435028-adc4-4b77-85d3-5d7659cd80f0\") " Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.712069 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/986f5b8c-a467-455c-9b4c-e53572535143-catalog-content\") pod \"986f5b8c-a467-455c-9b4c-e53572535143\" (UID: \"986f5b8c-a467-455c-9b4c-e53572535143\") " Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.712098 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv7gl\" (UniqueName: \"kubernetes.io/projected/945e7b12-b4c7-45e9-9956-6dda3eed3c62-kube-api-access-kv7gl\") pod \"945e7b12-b4c7-45e9-9956-6dda3eed3c62\" (UID: \"945e7b12-b4c7-45e9-9956-6dda3eed3c62\") " Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.712124 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17368088-aec0-4319-8575-045b54487a1f-utilities\") pod \"17368088-aec0-4319-8575-045b54487a1f\" (UID: \"17368088-aec0-4319-8575-045b54487a1f\") " Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.712145 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945e7b12-b4c7-45e9-9956-6dda3eed3c62-utilities\") pod \"945e7b12-b4c7-45e9-9956-6dda3eed3c62\" (UID: \"945e7b12-b4c7-45e9-9956-6dda3eed3c62\") " Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.712179 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/986f5b8c-a467-455c-9b4c-e53572535143-utilities\") pod \"986f5b8c-a467-455c-9b4c-e53572535143\" (UID: \"986f5b8c-a467-455c-9b4c-e53572535143\") " Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.712222 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945e7b12-b4c7-45e9-9956-6dda3eed3c62-catalog-content\") pod \"945e7b12-b4c7-45e9-9956-6dda3eed3c62\" (UID: \"945e7b12-b4c7-45e9-9956-6dda3eed3c62\") " Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.712238 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17368088-aec0-4319-8575-045b54487a1f-catalog-content\") pod \"17368088-aec0-4319-8575-045b54487a1f\" (UID: \"17368088-aec0-4319-8575-045b54487a1f\") " Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.712282 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31435028-adc4-4b77-85d3-5d7659cd80f0-marketplace-trusted-ca\") pod \"31435028-adc4-4b77-85d3-5d7659cd80f0\" (UID: \"31435028-adc4-4b77-85d3-5d7659cd80f0\") " Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.712306 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmcgq\" (UniqueName: \"kubernetes.io/projected/17368088-aec0-4319-8575-045b54487a1f-kube-api-access-xmcgq\") pod \"17368088-aec0-4319-8575-045b54487a1f\" (UID: \"17368088-aec0-4319-8575-045b54487a1f\") " Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.712325 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/31435028-adc4-4b77-85d3-5d7659cd80f0-marketplace-operator-metrics\") pod \"31435028-adc4-4b77-85d3-5d7659cd80f0\" (UID: 
\"31435028-adc4-4b77-85d3-5d7659cd80f0\") " Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.712345 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jgpj\" (UniqueName: \"kubernetes.io/projected/986f5b8c-a467-455c-9b4c-e53572535143-kube-api-access-5jgpj\") pod \"986f5b8c-a467-455c-9b4c-e53572535143\" (UID: \"986f5b8c-a467-455c-9b4c-e53572535143\") " Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.712626 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fdc1c63-8a73-405f-aede-75834651cccc-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.713376 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/986f5b8c-a467-455c-9b4c-e53572535143-utilities" (OuterVolumeSpecName: "utilities") pod "986f5b8c-a467-455c-9b4c-e53572535143" (UID: "986f5b8c-a467-455c-9b4c-e53572535143"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.713578 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17368088-aec0-4319-8575-045b54487a1f-utilities" (OuterVolumeSpecName: "utilities") pod "17368088-aec0-4319-8575-045b54487a1f" (UID: "17368088-aec0-4319-8575-045b54487a1f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.713587 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/945e7b12-b4c7-45e9-9956-6dda3eed3c62-utilities" (OuterVolumeSpecName: "utilities") pod "945e7b12-b4c7-45e9-9956-6dda3eed3c62" (UID: "945e7b12-b4c7-45e9-9956-6dda3eed3c62"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.714275 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31435028-adc4-4b77-85d3-5d7659cd80f0-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "31435028-adc4-4b77-85d3-5d7659cd80f0" (UID: "31435028-adc4-4b77-85d3-5d7659cd80f0"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.729596 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/945e7b12-b4c7-45e9-9956-6dda3eed3c62-kube-api-access-kv7gl" (OuterVolumeSpecName: "kube-api-access-kv7gl") pod "945e7b12-b4c7-45e9-9956-6dda3eed3c62" (UID: "945e7b12-b4c7-45e9-9956-6dda3eed3c62"). InnerVolumeSpecName "kube-api-access-kv7gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.731664 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31435028-adc4-4b77-85d3-5d7659cd80f0-kube-api-access-zxdx7" (OuterVolumeSpecName: "kube-api-access-zxdx7") pod "31435028-adc4-4b77-85d3-5d7659cd80f0" (UID: "31435028-adc4-4b77-85d3-5d7659cd80f0"). InnerVolumeSpecName "kube-api-access-zxdx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.737640 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31435028-adc4-4b77-85d3-5d7659cd80f0-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "31435028-adc4-4b77-85d3-5d7659cd80f0" (UID: "31435028-adc4-4b77-85d3-5d7659cd80f0"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.738048 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17368088-aec0-4319-8575-045b54487a1f-kube-api-access-xmcgq" (OuterVolumeSpecName: "kube-api-access-xmcgq") pod "17368088-aec0-4319-8575-045b54487a1f" (UID: "17368088-aec0-4319-8575-045b54487a1f"). InnerVolumeSpecName "kube-api-access-xmcgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.740990 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/986f5b8c-a467-455c-9b4c-e53572535143-kube-api-access-5jgpj" (OuterVolumeSpecName: "kube-api-access-5jgpj") pod "986f5b8c-a467-455c-9b4c-e53572535143" (UID: "986f5b8c-a467-455c-9b4c-e53572535143"). InnerVolumeSpecName "kube-api-access-5jgpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.758782 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/986f5b8c-a467-455c-9b4c-e53572535143-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "986f5b8c-a467-455c-9b4c-e53572535143" (UID: "986f5b8c-a467-455c-9b4c-e53572535143"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.781213 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17368088-aec0-4319-8575-045b54487a1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17368088-aec0-4319-8575-045b54487a1f" (UID: "17368088-aec0-4319-8575-045b54487a1f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.814057 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31435028-adc4-4b77-85d3-5d7659cd80f0-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.814089 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmcgq\" (UniqueName: \"kubernetes.io/projected/17368088-aec0-4319-8575-045b54487a1f-kube-api-access-xmcgq\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.814100 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/31435028-adc4-4b77-85d3-5d7659cd80f0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.814109 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jgpj\" (UniqueName: \"kubernetes.io/projected/986f5b8c-a467-455c-9b4c-e53572535143-kube-api-access-5jgpj\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.814117 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxdx7\" (UniqueName: \"kubernetes.io/projected/31435028-adc4-4b77-85d3-5d7659cd80f0-kube-api-access-zxdx7\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.814125 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/986f5b8c-a467-455c-9b4c-e53572535143-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.814133 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv7gl\" (UniqueName: \"kubernetes.io/projected/945e7b12-b4c7-45e9-9956-6dda3eed3c62-kube-api-access-kv7gl\") 
on node \"crc\" DevicePath \"\"" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.814171 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17368088-aec0-4319-8575-045b54487a1f-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.814180 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945e7b12-b4c7-45e9-9956-6dda3eed3c62-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.814188 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/986f5b8c-a467-455c-9b4c-e53572535143-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.814196 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17368088-aec0-4319-8575-045b54487a1f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.859950 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/945e7b12-b4c7-45e9-9956-6dda3eed3c62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "945e7b12-b4c7-45e9-9956-6dda3eed3c62" (UID: "945e7b12-b4c7-45e9-9956-6dda3eed3c62"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.904578 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2p9wx"] Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.909640 4832 generic.go:334] "Generic (PLEG): container finished" podID="7fdc1c63-8a73-405f-aede-75834651cccc" containerID="9ee9b29a44cc7b2ed6d165449d32ae75117ef1d1cb300d8ea497654e57b8b957" exitCode=0 Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.909708 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qbwsc" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.909713 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qbwsc" event={"ID":"7fdc1c63-8a73-405f-aede-75834651cccc","Type":"ContainerDied","Data":"9ee9b29a44cc7b2ed6d165449d32ae75117ef1d1cb300d8ea497654e57b8b957"} Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.909805 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qbwsc" event={"ID":"7fdc1c63-8a73-405f-aede-75834651cccc","Type":"ContainerDied","Data":"e0f8b68fd71abea03263ac1818eee27790415b3cceb52d56a13401421334b60b"} Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.909823 4832 scope.go:117] "RemoveContainer" containerID="9ee9b29a44cc7b2ed6d165449d32ae75117ef1d1cb300d8ea497654e57b8b957" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.912948 4832 generic.go:334] "Generic (PLEG): container finished" podID="986f5b8c-a467-455c-9b4c-e53572535143" containerID="9b810b3cf14274c3d6bc8412c6291f40bbc22e26a32c72ba85971c3bac819f88" exitCode=0 Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.913022 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnxm6" 
event={"ID":"986f5b8c-a467-455c-9b4c-e53572535143","Type":"ContainerDied","Data":"9b810b3cf14274c3d6bc8412c6291f40bbc22e26a32c72ba85971c3bac819f88"} Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.913101 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnxm6" event={"ID":"986f5b8c-a467-455c-9b4c-e53572535143","Type":"ContainerDied","Data":"2757f6911bf4d1355090a38d007e05f7f4c893aecddcefef4a92310f761707e7"} Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.913065 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnxm6" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.917297 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945e7b12-b4c7-45e9-9956-6dda3eed3c62-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.919408 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xql8n" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.919422 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xql8n" event={"ID":"17368088-aec0-4319-8575-045b54487a1f","Type":"ContainerDied","Data":"20b30a02905f2cf95243905236e40315a443ed4e35d7ca40bc5a17a153a52fb7"} Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.920699 4832 generic.go:334] "Generic (PLEG): container finished" podID="17368088-aec0-4319-8575-045b54487a1f" containerID="20b30a02905f2cf95243905236e40315a443ed4e35d7ca40bc5a17a153a52fb7" exitCode=0 Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.920829 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xql8n" event={"ID":"17368088-aec0-4319-8575-045b54487a1f","Type":"ContainerDied","Data":"5a8254018cef9fa4d972ebbeaae47a499e5ac0277b97edde5f18d82471cef533"} Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.930037 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qbwsc"] Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.930385 4832 generic.go:334] "Generic (PLEG): container finished" podID="945e7b12-b4c7-45e9-9956-6dda3eed3c62" containerID="b15d8e0d138920b0a631005242ed3ecd61b4d2eefd94016f80e3acd48314e6ee" exitCode=0 Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.930541 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pf4z" event={"ID":"945e7b12-b4c7-45e9-9956-6dda3eed3c62","Type":"ContainerDied","Data":"b15d8e0d138920b0a631005242ed3ecd61b4d2eefd94016f80e3acd48314e6ee"} Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.930585 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pf4z" 
event={"ID":"945e7b12-b4c7-45e9-9956-6dda3eed3c62","Type":"ContainerDied","Data":"726a4d754524f683074e0bc1ed4cd008f67a4917dcda3d5084f2be846f97aabd"} Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.930707 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7pf4z" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.934726 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qbwsc"] Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.935749 4832 generic.go:334] "Generic (PLEG): container finished" podID="31435028-adc4-4b77-85d3-5d7659cd80f0" containerID="fe3982da7b728fa4f445f193ad50d7f3b9e91bb61e73838a0e014d3e3ae44ff8" exitCode=0 Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.935781 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" event={"ID":"31435028-adc4-4b77-85d3-5d7659cd80f0","Type":"ContainerDied","Data":"fe3982da7b728fa4f445f193ad50d7f3b9e91bb61e73838a0e014d3e3ae44ff8"} Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.935801 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" event={"ID":"31435028-adc4-4b77-85d3-5d7659cd80f0","Type":"ContainerDied","Data":"a874895a1d7d77115fe598cc50f027d73dca9e0892502f388992056228614caf"} Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.935866 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ph4hc" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.943738 4832 scope.go:117] "RemoveContainer" containerID="cf6c92c22df95d17ce6af9f4645de9ad91fee9ebd421f46b7922fa48367d2edd" Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.962455 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnxm6"] Mar 12 14:54:40 crc kubenswrapper[4832]: I0312 14:54:40.966109 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnxm6"] Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.026570 4832 scope.go:117] "RemoveContainer" containerID="dedb4b224557e8c6dcb50187983b9cde8ee400525e89ec4509e16bed23a060e7" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.043342 4832 scope.go:117] "RemoveContainer" containerID="9ee9b29a44cc7b2ed6d165449d32ae75117ef1d1cb300d8ea497654e57b8b957" Mar 12 14:54:41 crc kubenswrapper[4832]: E0312 14:54:41.043691 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ee9b29a44cc7b2ed6d165449d32ae75117ef1d1cb300d8ea497654e57b8b957\": container with ID starting with 9ee9b29a44cc7b2ed6d165449d32ae75117ef1d1cb300d8ea497654e57b8b957 not found: ID does not exist" containerID="9ee9b29a44cc7b2ed6d165449d32ae75117ef1d1cb300d8ea497654e57b8b957" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.043741 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ee9b29a44cc7b2ed6d165449d32ae75117ef1d1cb300d8ea497654e57b8b957"} err="failed to get container status \"9ee9b29a44cc7b2ed6d165449d32ae75117ef1d1cb300d8ea497654e57b8b957\": rpc error: code = NotFound desc = could not find container \"9ee9b29a44cc7b2ed6d165449d32ae75117ef1d1cb300d8ea497654e57b8b957\": container with ID starting with 9ee9b29a44cc7b2ed6d165449d32ae75117ef1d1cb300d8ea497654e57b8b957 
not found: ID does not exist" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.043772 4832 scope.go:117] "RemoveContainer" containerID="cf6c92c22df95d17ce6af9f4645de9ad91fee9ebd421f46b7922fa48367d2edd" Mar 12 14:54:41 crc kubenswrapper[4832]: E0312 14:54:41.044029 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf6c92c22df95d17ce6af9f4645de9ad91fee9ebd421f46b7922fa48367d2edd\": container with ID starting with cf6c92c22df95d17ce6af9f4645de9ad91fee9ebd421f46b7922fa48367d2edd not found: ID does not exist" containerID="cf6c92c22df95d17ce6af9f4645de9ad91fee9ebd421f46b7922fa48367d2edd" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.044052 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf6c92c22df95d17ce6af9f4645de9ad91fee9ebd421f46b7922fa48367d2edd"} err="failed to get container status \"cf6c92c22df95d17ce6af9f4645de9ad91fee9ebd421f46b7922fa48367d2edd\": rpc error: code = NotFound desc = could not find container \"cf6c92c22df95d17ce6af9f4645de9ad91fee9ebd421f46b7922fa48367d2edd\": container with ID starting with cf6c92c22df95d17ce6af9f4645de9ad91fee9ebd421f46b7922fa48367d2edd not found: ID does not exist" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.044065 4832 scope.go:117] "RemoveContainer" containerID="dedb4b224557e8c6dcb50187983b9cde8ee400525e89ec4509e16bed23a060e7" Mar 12 14:54:41 crc kubenswrapper[4832]: E0312 14:54:41.044337 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dedb4b224557e8c6dcb50187983b9cde8ee400525e89ec4509e16bed23a060e7\": container with ID starting with dedb4b224557e8c6dcb50187983b9cde8ee400525e89ec4509e16bed23a060e7 not found: ID does not exist" containerID="dedb4b224557e8c6dcb50187983b9cde8ee400525e89ec4509e16bed23a060e7" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.044367 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dedb4b224557e8c6dcb50187983b9cde8ee400525e89ec4509e16bed23a060e7"} err="failed to get container status \"dedb4b224557e8c6dcb50187983b9cde8ee400525e89ec4509e16bed23a060e7\": rpc error: code = NotFound desc = could not find container \"dedb4b224557e8c6dcb50187983b9cde8ee400525e89ec4509e16bed23a060e7\": container with ID starting with dedb4b224557e8c6dcb50187983b9cde8ee400525e89ec4509e16bed23a060e7 not found: ID does not exist" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.044388 4832 scope.go:117] "RemoveContainer" containerID="9b810b3cf14274c3d6bc8412c6291f40bbc22e26a32c72ba85971c3bac819f88" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.082135 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7pf4z"] Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.088533 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7pf4z"] Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.093915 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ph4hc"] Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.095767 4832 scope.go:117] "RemoveContainer" containerID="f3c91d99359cee44595de7b283093ca848096bb1cfc1fd7c8f390f26879ae234" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.098704 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ph4hc"] Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.107690 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xql8n"] Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.116888 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xql8n"] Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.117285 
4832 scope.go:117] "RemoveContainer" containerID="122b2597d92aaada6706185c60a4eab8f06b27b00fd29cf49332496c5799e535" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.134826 4832 scope.go:117] "RemoveContainer" containerID="9b810b3cf14274c3d6bc8412c6291f40bbc22e26a32c72ba85971c3bac819f88" Mar 12 14:54:41 crc kubenswrapper[4832]: E0312 14:54:41.135265 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b810b3cf14274c3d6bc8412c6291f40bbc22e26a32c72ba85971c3bac819f88\": container with ID starting with 9b810b3cf14274c3d6bc8412c6291f40bbc22e26a32c72ba85971c3bac819f88 not found: ID does not exist" containerID="9b810b3cf14274c3d6bc8412c6291f40bbc22e26a32c72ba85971c3bac819f88" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.135308 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b810b3cf14274c3d6bc8412c6291f40bbc22e26a32c72ba85971c3bac819f88"} err="failed to get container status \"9b810b3cf14274c3d6bc8412c6291f40bbc22e26a32c72ba85971c3bac819f88\": rpc error: code = NotFound desc = could not find container \"9b810b3cf14274c3d6bc8412c6291f40bbc22e26a32c72ba85971c3bac819f88\": container with ID starting with 9b810b3cf14274c3d6bc8412c6291f40bbc22e26a32c72ba85971c3bac819f88 not found: ID does not exist" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.135341 4832 scope.go:117] "RemoveContainer" containerID="f3c91d99359cee44595de7b283093ca848096bb1cfc1fd7c8f390f26879ae234" Mar 12 14:54:41 crc kubenswrapper[4832]: E0312 14:54:41.135764 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3c91d99359cee44595de7b283093ca848096bb1cfc1fd7c8f390f26879ae234\": container with ID starting with f3c91d99359cee44595de7b283093ca848096bb1cfc1fd7c8f390f26879ae234 not found: ID does not exist" containerID="f3c91d99359cee44595de7b283093ca848096bb1cfc1fd7c8f390f26879ae234" 
Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.135789 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c91d99359cee44595de7b283093ca848096bb1cfc1fd7c8f390f26879ae234"} err="failed to get container status \"f3c91d99359cee44595de7b283093ca848096bb1cfc1fd7c8f390f26879ae234\": rpc error: code = NotFound desc = could not find container \"f3c91d99359cee44595de7b283093ca848096bb1cfc1fd7c8f390f26879ae234\": container with ID starting with f3c91d99359cee44595de7b283093ca848096bb1cfc1fd7c8f390f26879ae234 not found: ID does not exist" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.135803 4832 scope.go:117] "RemoveContainer" containerID="122b2597d92aaada6706185c60a4eab8f06b27b00fd29cf49332496c5799e535" Mar 12 14:54:41 crc kubenswrapper[4832]: E0312 14:54:41.136089 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"122b2597d92aaada6706185c60a4eab8f06b27b00fd29cf49332496c5799e535\": container with ID starting with 122b2597d92aaada6706185c60a4eab8f06b27b00fd29cf49332496c5799e535 not found: ID does not exist" containerID="122b2597d92aaada6706185c60a4eab8f06b27b00fd29cf49332496c5799e535" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.136105 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"122b2597d92aaada6706185c60a4eab8f06b27b00fd29cf49332496c5799e535"} err="failed to get container status \"122b2597d92aaada6706185c60a4eab8f06b27b00fd29cf49332496c5799e535\": rpc error: code = NotFound desc = could not find container \"122b2597d92aaada6706185c60a4eab8f06b27b00fd29cf49332496c5799e535\": container with ID starting with 122b2597d92aaada6706185c60a4eab8f06b27b00fd29cf49332496c5799e535 not found: ID does not exist" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.136117 4832 scope.go:117] "RemoveContainer" 
containerID="20b30a02905f2cf95243905236e40315a443ed4e35d7ca40bc5a17a153a52fb7" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.154720 4832 scope.go:117] "RemoveContainer" containerID="d91cdd0e71a9387d16a0a9d1df52b7aef12a2ca0e8c099dfec9a43dc30035e81" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.167657 4832 scope.go:117] "RemoveContainer" containerID="86d14856c32f91aee9c37a944122a4f7260eb100f1442ba78f3511d2d1f35542" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.181750 4832 scope.go:117] "RemoveContainer" containerID="20b30a02905f2cf95243905236e40315a443ed4e35d7ca40bc5a17a153a52fb7" Mar 12 14:54:41 crc kubenswrapper[4832]: E0312 14:54:41.182269 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20b30a02905f2cf95243905236e40315a443ed4e35d7ca40bc5a17a153a52fb7\": container with ID starting with 20b30a02905f2cf95243905236e40315a443ed4e35d7ca40bc5a17a153a52fb7 not found: ID does not exist" containerID="20b30a02905f2cf95243905236e40315a443ed4e35d7ca40bc5a17a153a52fb7" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.182298 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20b30a02905f2cf95243905236e40315a443ed4e35d7ca40bc5a17a153a52fb7"} err="failed to get container status \"20b30a02905f2cf95243905236e40315a443ed4e35d7ca40bc5a17a153a52fb7\": rpc error: code = NotFound desc = could not find container \"20b30a02905f2cf95243905236e40315a443ed4e35d7ca40bc5a17a153a52fb7\": container with ID starting with 20b30a02905f2cf95243905236e40315a443ed4e35d7ca40bc5a17a153a52fb7 not found: ID does not exist" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.182324 4832 scope.go:117] "RemoveContainer" containerID="d91cdd0e71a9387d16a0a9d1df52b7aef12a2ca0e8c099dfec9a43dc30035e81" Mar 12 14:54:41 crc kubenswrapper[4832]: E0312 14:54:41.182796 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"d91cdd0e71a9387d16a0a9d1df52b7aef12a2ca0e8c099dfec9a43dc30035e81\": container with ID starting with d91cdd0e71a9387d16a0a9d1df52b7aef12a2ca0e8c099dfec9a43dc30035e81 not found: ID does not exist" containerID="d91cdd0e71a9387d16a0a9d1df52b7aef12a2ca0e8c099dfec9a43dc30035e81" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.182859 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d91cdd0e71a9387d16a0a9d1df52b7aef12a2ca0e8c099dfec9a43dc30035e81"} err="failed to get container status \"d91cdd0e71a9387d16a0a9d1df52b7aef12a2ca0e8c099dfec9a43dc30035e81\": rpc error: code = NotFound desc = could not find container \"d91cdd0e71a9387d16a0a9d1df52b7aef12a2ca0e8c099dfec9a43dc30035e81\": container with ID starting with d91cdd0e71a9387d16a0a9d1df52b7aef12a2ca0e8c099dfec9a43dc30035e81 not found: ID does not exist" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.182901 4832 scope.go:117] "RemoveContainer" containerID="86d14856c32f91aee9c37a944122a4f7260eb100f1442ba78f3511d2d1f35542" Mar 12 14:54:41 crc kubenswrapper[4832]: E0312 14:54:41.183460 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86d14856c32f91aee9c37a944122a4f7260eb100f1442ba78f3511d2d1f35542\": container with ID starting with 86d14856c32f91aee9c37a944122a4f7260eb100f1442ba78f3511d2d1f35542 not found: ID does not exist" containerID="86d14856c32f91aee9c37a944122a4f7260eb100f1442ba78f3511d2d1f35542" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.183487 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d14856c32f91aee9c37a944122a4f7260eb100f1442ba78f3511d2d1f35542"} err="failed to get container status \"86d14856c32f91aee9c37a944122a4f7260eb100f1442ba78f3511d2d1f35542\": rpc error: code = NotFound desc = could not find container 
\"86d14856c32f91aee9c37a944122a4f7260eb100f1442ba78f3511d2d1f35542\": container with ID starting with 86d14856c32f91aee9c37a944122a4f7260eb100f1442ba78f3511d2d1f35542 not found: ID does not exist" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.183520 4832 scope.go:117] "RemoveContainer" containerID="b15d8e0d138920b0a631005242ed3ecd61b4d2eefd94016f80e3acd48314e6ee" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.195216 4832 scope.go:117] "RemoveContainer" containerID="c5d3e6355ad4f53cc49a2bb3c9fa291abe55db7a9d285ae109d61fa0a544569d" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.211208 4832 scope.go:117] "RemoveContainer" containerID="575078461d28e4e1c05880bd7b15db3d39d6d03f467859b2ded519f19342a5ec" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.229948 4832 scope.go:117] "RemoveContainer" containerID="b15d8e0d138920b0a631005242ed3ecd61b4d2eefd94016f80e3acd48314e6ee" Mar 12 14:54:41 crc kubenswrapper[4832]: E0312 14:54:41.230461 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b15d8e0d138920b0a631005242ed3ecd61b4d2eefd94016f80e3acd48314e6ee\": container with ID starting with b15d8e0d138920b0a631005242ed3ecd61b4d2eefd94016f80e3acd48314e6ee not found: ID does not exist" containerID="b15d8e0d138920b0a631005242ed3ecd61b4d2eefd94016f80e3acd48314e6ee" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.230525 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b15d8e0d138920b0a631005242ed3ecd61b4d2eefd94016f80e3acd48314e6ee"} err="failed to get container status \"b15d8e0d138920b0a631005242ed3ecd61b4d2eefd94016f80e3acd48314e6ee\": rpc error: code = NotFound desc = could not find container \"b15d8e0d138920b0a631005242ed3ecd61b4d2eefd94016f80e3acd48314e6ee\": container with ID starting with b15d8e0d138920b0a631005242ed3ecd61b4d2eefd94016f80e3acd48314e6ee not found: ID does not exist" Mar 12 14:54:41 crc 
kubenswrapper[4832]: I0312 14:54:41.230557 4832 scope.go:117] "RemoveContainer" containerID="c5d3e6355ad4f53cc49a2bb3c9fa291abe55db7a9d285ae109d61fa0a544569d" Mar 12 14:54:41 crc kubenswrapper[4832]: E0312 14:54:41.230849 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5d3e6355ad4f53cc49a2bb3c9fa291abe55db7a9d285ae109d61fa0a544569d\": container with ID starting with c5d3e6355ad4f53cc49a2bb3c9fa291abe55db7a9d285ae109d61fa0a544569d not found: ID does not exist" containerID="c5d3e6355ad4f53cc49a2bb3c9fa291abe55db7a9d285ae109d61fa0a544569d" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.230877 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5d3e6355ad4f53cc49a2bb3c9fa291abe55db7a9d285ae109d61fa0a544569d"} err="failed to get container status \"c5d3e6355ad4f53cc49a2bb3c9fa291abe55db7a9d285ae109d61fa0a544569d\": rpc error: code = NotFound desc = could not find container \"c5d3e6355ad4f53cc49a2bb3c9fa291abe55db7a9d285ae109d61fa0a544569d\": container with ID starting with c5d3e6355ad4f53cc49a2bb3c9fa291abe55db7a9d285ae109d61fa0a544569d not found: ID does not exist" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.230897 4832 scope.go:117] "RemoveContainer" containerID="575078461d28e4e1c05880bd7b15db3d39d6d03f467859b2ded519f19342a5ec" Mar 12 14:54:41 crc kubenswrapper[4832]: E0312 14:54:41.231363 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"575078461d28e4e1c05880bd7b15db3d39d6d03f467859b2ded519f19342a5ec\": container with ID starting with 575078461d28e4e1c05880bd7b15db3d39d6d03f467859b2ded519f19342a5ec not found: ID does not exist" containerID="575078461d28e4e1c05880bd7b15db3d39d6d03f467859b2ded519f19342a5ec" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.231424 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"575078461d28e4e1c05880bd7b15db3d39d6d03f467859b2ded519f19342a5ec"} err="failed to get container status \"575078461d28e4e1c05880bd7b15db3d39d6d03f467859b2ded519f19342a5ec\": rpc error: code = NotFound desc = could not find container \"575078461d28e4e1c05880bd7b15db3d39d6d03f467859b2ded519f19342a5ec\": container with ID starting with 575078461d28e4e1c05880bd7b15db3d39d6d03f467859b2ded519f19342a5ec not found: ID does not exist" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.231451 4832 scope.go:117] "RemoveContainer" containerID="fe3982da7b728fa4f445f193ad50d7f3b9e91bb61e73838a0e014d3e3ae44ff8" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.243383 4832 scope.go:117] "RemoveContainer" containerID="fe3982da7b728fa4f445f193ad50d7f3b9e91bb61e73838a0e014d3e3ae44ff8" Mar 12 14:54:41 crc kubenswrapper[4832]: E0312 14:54:41.243774 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe3982da7b728fa4f445f193ad50d7f3b9e91bb61e73838a0e014d3e3ae44ff8\": container with ID starting with fe3982da7b728fa4f445f193ad50d7f3b9e91bb61e73838a0e014d3e3ae44ff8 not found: ID does not exist" containerID="fe3982da7b728fa4f445f193ad50d7f3b9e91bb61e73838a0e014d3e3ae44ff8" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.243828 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe3982da7b728fa4f445f193ad50d7f3b9e91bb61e73838a0e014d3e3ae44ff8"} err="failed to get container status \"fe3982da7b728fa4f445f193ad50d7f3b9e91bb61e73838a0e014d3e3ae44ff8\": rpc error: code = NotFound desc = could not find container \"fe3982da7b728fa4f445f193ad50d7f3b9e91bb61e73838a0e014d3e3ae44ff8\": container with ID starting with fe3982da7b728fa4f445f193ad50d7f3b9e91bb61e73838a0e014d3e3ae44ff8 not found: ID does not exist" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.942991 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-2p9wx" event={"ID":"1fe789bb-6979-470b-96cc-e07a65463ecb","Type":"ContainerStarted","Data":"032f3dc03a3937d7b08416d4f7dea3dae3ba580e1d30309b7ba21c7d38bbabe9"} Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.943048 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2p9wx" event={"ID":"1fe789bb-6979-470b-96cc-e07a65463ecb","Type":"ContainerStarted","Data":"62ddcda09c31630619ce82f9914d4a1573735b35b88cf5b5a15a12a646e6449b"} Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.943271 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2p9wx" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.946814 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2p9wx" Mar 12 14:54:41 crc kubenswrapper[4832]: I0312 14:54:41.960753 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2p9wx" podStartSLOduration=2.960740183 podStartE2EDuration="2.960740183s" podCreationTimestamp="2026-03-12 14:54:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:54:41.958660114 +0000 UTC m=+440.602674350" watchObservedRunningTime="2026-03-12 14:54:41.960740183 +0000 UTC m=+440.604754409" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.037040 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gd9lm"] Mar 12 14:54:42 crc kubenswrapper[4832]: E0312 14:54:42.037306 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945e7b12-b4c7-45e9-9956-6dda3eed3c62" containerName="registry-server" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.037320 4832 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="945e7b12-b4c7-45e9-9956-6dda3eed3c62" containerName="registry-server" Mar 12 14:54:42 crc kubenswrapper[4832]: E0312 14:54:42.037333 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17368088-aec0-4319-8575-045b54487a1f" containerName="extract-content" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.037339 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="17368088-aec0-4319-8575-045b54487a1f" containerName="extract-content" Mar 12 14:54:42 crc kubenswrapper[4832]: E0312 14:54:42.037347 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945e7b12-b4c7-45e9-9956-6dda3eed3c62" containerName="extract-utilities" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.037354 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="945e7b12-b4c7-45e9-9956-6dda3eed3c62" containerName="extract-utilities" Mar 12 14:54:42 crc kubenswrapper[4832]: E0312 14:54:42.037366 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17368088-aec0-4319-8575-045b54487a1f" containerName="extract-utilities" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.037372 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="17368088-aec0-4319-8575-045b54487a1f" containerName="extract-utilities" Mar 12 14:54:42 crc kubenswrapper[4832]: E0312 14:54:42.037380 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986f5b8c-a467-455c-9b4c-e53572535143" containerName="extract-utilities" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.037386 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="986f5b8c-a467-455c-9b4c-e53572535143" containerName="extract-utilities" Mar 12 14:54:42 crc kubenswrapper[4832]: E0312 14:54:42.037396 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fdc1c63-8a73-405f-aede-75834651cccc" containerName="extract-utilities" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.037402 4832 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="7fdc1c63-8a73-405f-aede-75834651cccc" containerName="extract-utilities" Mar 12 14:54:42 crc kubenswrapper[4832]: E0312 14:54:42.037412 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fdc1c63-8a73-405f-aede-75834651cccc" containerName="extract-content" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.037419 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fdc1c63-8a73-405f-aede-75834651cccc" containerName="extract-content" Mar 12 14:54:42 crc kubenswrapper[4832]: E0312 14:54:42.037427 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31435028-adc4-4b77-85d3-5d7659cd80f0" containerName="marketplace-operator" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.037433 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="31435028-adc4-4b77-85d3-5d7659cd80f0" containerName="marketplace-operator" Mar 12 14:54:42 crc kubenswrapper[4832]: E0312 14:54:42.037441 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17368088-aec0-4319-8575-045b54487a1f" containerName="registry-server" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.037446 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="17368088-aec0-4319-8575-045b54487a1f" containerName="registry-server" Mar 12 14:54:42 crc kubenswrapper[4832]: E0312 14:54:42.037455 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986f5b8c-a467-455c-9b4c-e53572535143" containerName="extract-content" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.037461 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="986f5b8c-a467-455c-9b4c-e53572535143" containerName="extract-content" Mar 12 14:54:42 crc kubenswrapper[4832]: E0312 14:54:42.037471 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945e7b12-b4c7-45e9-9956-6dda3eed3c62" containerName="extract-content" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.037479 4832 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="945e7b12-b4c7-45e9-9956-6dda3eed3c62" containerName="extract-content" Mar 12 14:54:42 crc kubenswrapper[4832]: E0312 14:54:42.037489 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986f5b8c-a467-455c-9b4c-e53572535143" containerName="registry-server" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.037495 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="986f5b8c-a467-455c-9b4c-e53572535143" containerName="registry-server" Mar 12 14:54:42 crc kubenswrapper[4832]: E0312 14:54:42.037519 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fdc1c63-8a73-405f-aede-75834651cccc" containerName="registry-server" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.037527 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fdc1c63-8a73-405f-aede-75834651cccc" containerName="registry-server" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.037609 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="31435028-adc4-4b77-85d3-5d7659cd80f0" containerName="marketplace-operator" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.037620 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fdc1c63-8a73-405f-aede-75834651cccc" containerName="registry-server" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.037631 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="986f5b8c-a467-455c-9b4c-e53572535143" containerName="registry-server" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.037638 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="945e7b12-b4c7-45e9-9956-6dda3eed3c62" containerName="registry-server" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.037646 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="17368088-aec0-4319-8575-045b54487a1f" containerName="registry-server" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.038454 4832 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gd9lm" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.040041 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.046384 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gd9lm"] Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.233201 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/736a2adc-4c77-4a76-8fb3-a2c008cb8b6b-catalog-content\") pod \"redhat-marketplace-gd9lm\" (UID: \"736a2adc-4c77-4a76-8fb3-a2c008cb8b6b\") " pod="openshift-marketplace/redhat-marketplace-gd9lm" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.233257 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/736a2adc-4c77-4a76-8fb3-a2c008cb8b6b-utilities\") pod \"redhat-marketplace-gd9lm\" (UID: \"736a2adc-4c77-4a76-8fb3-a2c008cb8b6b\") " pod="openshift-marketplace/redhat-marketplace-gd9lm" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.233290 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4xnb\" (UniqueName: \"kubernetes.io/projected/736a2adc-4c77-4a76-8fb3-a2c008cb8b6b-kube-api-access-g4xnb\") pod \"redhat-marketplace-gd9lm\" (UID: \"736a2adc-4c77-4a76-8fb3-a2c008cb8b6b\") " pod="openshift-marketplace/redhat-marketplace-gd9lm" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.238009 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jgllk"] Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.238920 4832 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-jgllk" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.240441 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.248177 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jgllk"] Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.334561 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/736a2adc-4c77-4a76-8fb3-a2c008cb8b6b-catalog-content\") pod \"redhat-marketplace-gd9lm\" (UID: \"736a2adc-4c77-4a76-8fb3-a2c008cb8b6b\") " pod="openshift-marketplace/redhat-marketplace-gd9lm" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.334620 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/736a2adc-4c77-4a76-8fb3-a2c008cb8b6b-utilities\") pod \"redhat-marketplace-gd9lm\" (UID: \"736a2adc-4c77-4a76-8fb3-a2c008cb8b6b\") " pod="openshift-marketplace/redhat-marketplace-gd9lm" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.334659 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4xnb\" (UniqueName: \"kubernetes.io/projected/736a2adc-4c77-4a76-8fb3-a2c008cb8b6b-kube-api-access-g4xnb\") pod \"redhat-marketplace-gd9lm\" (UID: \"736a2adc-4c77-4a76-8fb3-a2c008cb8b6b\") " pod="openshift-marketplace/redhat-marketplace-gd9lm" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.335083 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/736a2adc-4c77-4a76-8fb3-a2c008cb8b6b-catalog-content\") pod \"redhat-marketplace-gd9lm\" (UID: \"736a2adc-4c77-4a76-8fb3-a2c008cb8b6b\") " 
pod="openshift-marketplace/redhat-marketplace-gd9lm" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.335226 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/736a2adc-4c77-4a76-8fb3-a2c008cb8b6b-utilities\") pod \"redhat-marketplace-gd9lm\" (UID: \"736a2adc-4c77-4a76-8fb3-a2c008cb8b6b\") " pod="openshift-marketplace/redhat-marketplace-gd9lm" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.352249 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4xnb\" (UniqueName: \"kubernetes.io/projected/736a2adc-4c77-4a76-8fb3-a2c008cb8b6b-kube-api-access-g4xnb\") pod \"redhat-marketplace-gd9lm\" (UID: \"736a2adc-4c77-4a76-8fb3-a2c008cb8b6b\") " pod="openshift-marketplace/redhat-marketplace-gd9lm" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.356730 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gd9lm" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.435663 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50f84e8c-fd56-4ff8-94de-906f7ed10a0e-utilities\") pod \"certified-operators-jgllk\" (UID: \"50f84e8c-fd56-4ff8-94de-906f7ed10a0e\") " pod="openshift-marketplace/certified-operators-jgllk" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.435726 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50f84e8c-fd56-4ff8-94de-906f7ed10a0e-catalog-content\") pod \"certified-operators-jgllk\" (UID: \"50f84e8c-fd56-4ff8-94de-906f7ed10a0e\") " pod="openshift-marketplace/certified-operators-jgllk" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.435809 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-4x8zg\" (UniqueName: \"kubernetes.io/projected/50f84e8c-fd56-4ff8-94de-906f7ed10a0e-kube-api-access-4x8zg\") pod \"certified-operators-jgllk\" (UID: \"50f84e8c-fd56-4ff8-94de-906f7ed10a0e\") " pod="openshift-marketplace/certified-operators-jgllk" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.536644 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x8zg\" (UniqueName: \"kubernetes.io/projected/50f84e8c-fd56-4ff8-94de-906f7ed10a0e-kube-api-access-4x8zg\") pod \"certified-operators-jgllk\" (UID: \"50f84e8c-fd56-4ff8-94de-906f7ed10a0e\") " pod="openshift-marketplace/certified-operators-jgllk" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.536973 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50f84e8c-fd56-4ff8-94de-906f7ed10a0e-utilities\") pod \"certified-operators-jgllk\" (UID: \"50f84e8c-fd56-4ff8-94de-906f7ed10a0e\") " pod="openshift-marketplace/certified-operators-jgllk" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.537002 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50f84e8c-fd56-4ff8-94de-906f7ed10a0e-catalog-content\") pod \"certified-operators-jgllk\" (UID: \"50f84e8c-fd56-4ff8-94de-906f7ed10a0e\") " pod="openshift-marketplace/certified-operators-jgllk" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.537606 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50f84e8c-fd56-4ff8-94de-906f7ed10a0e-catalog-content\") pod \"certified-operators-jgllk\" (UID: \"50f84e8c-fd56-4ff8-94de-906f7ed10a0e\") " pod="openshift-marketplace/certified-operators-jgllk" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.537606 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50f84e8c-fd56-4ff8-94de-906f7ed10a0e-utilities\") pod \"certified-operators-jgllk\" (UID: \"50f84e8c-fd56-4ff8-94de-906f7ed10a0e\") " pod="openshift-marketplace/certified-operators-jgllk" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.553519 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x8zg\" (UniqueName: \"kubernetes.io/projected/50f84e8c-fd56-4ff8-94de-906f7ed10a0e-kube-api-access-4x8zg\") pod \"certified-operators-jgllk\" (UID: \"50f84e8c-fd56-4ff8-94de-906f7ed10a0e\") " pod="openshift-marketplace/certified-operators-jgllk" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.554173 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jgllk" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.649913 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17368088-aec0-4319-8575-045b54487a1f" path="/var/lib/kubelet/pods/17368088-aec0-4319-8575-045b54487a1f/volumes" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.650820 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31435028-adc4-4b77-85d3-5d7659cd80f0" path="/var/lib/kubelet/pods/31435028-adc4-4b77-85d3-5d7659cd80f0/volumes" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.651813 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fdc1c63-8a73-405f-aede-75834651cccc" path="/var/lib/kubelet/pods/7fdc1c63-8a73-405f-aede-75834651cccc/volumes" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.653882 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="945e7b12-b4c7-45e9-9956-6dda3eed3c62" path="/var/lib/kubelet/pods/945e7b12-b4c7-45e9-9956-6dda3eed3c62/volumes" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.654630 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="986f5b8c-a467-455c-9b4c-e53572535143" path="/var/lib/kubelet/pods/986f5b8c-a467-455c-9b4c-e53572535143/volumes" Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.745576 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gd9lm"] Mar 12 14:54:42 crc kubenswrapper[4832]: W0312 14:54:42.745816 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod736a2adc_4c77_4a76_8fb3_a2c008cb8b6b.slice/crio-27a195d5fdd6e039e70fa5e0611b17357c21f12a629c9f5bb768e9caeef57257 WatchSource:0}: Error finding container 27a195d5fdd6e039e70fa5e0611b17357c21f12a629c9f5bb768e9caeef57257: Status 404 returned error can't find the container with id 27a195d5fdd6e039e70fa5e0611b17357c21f12a629c9f5bb768e9caeef57257 Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.928910 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jgllk"] Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.949982 4832 generic.go:334] "Generic (PLEG): container finished" podID="736a2adc-4c77-4a76-8fb3-a2c008cb8b6b" containerID="11dff2bc96e07dfafc5bf8dc7dbb5eabd246e51106db2e9f6a055726f564bdcf" exitCode=0 Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.950076 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gd9lm" event={"ID":"736a2adc-4c77-4a76-8fb3-a2c008cb8b6b","Type":"ContainerDied","Data":"11dff2bc96e07dfafc5bf8dc7dbb5eabd246e51106db2e9f6a055726f564bdcf"} Mar 12 14:54:42 crc kubenswrapper[4832]: I0312 14:54:42.950109 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gd9lm" event={"ID":"736a2adc-4c77-4a76-8fb3-a2c008cb8b6b","Type":"ContainerStarted","Data":"27a195d5fdd6e039e70fa5e0611b17357c21f12a629c9f5bb768e9caeef57257"} Mar 12 14:54:42 crc kubenswrapper[4832]: W0312 14:54:42.953297 4832 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50f84e8c_fd56_4ff8_94de_906f7ed10a0e.slice/crio-2c6b6c7a84ecdbe4b3068331c06440d01b73870d2f5c56b456c7f560efede467 WatchSource:0}: Error finding container 2c6b6c7a84ecdbe4b3068331c06440d01b73870d2f5c56b456c7f560efede467: Status 404 returned error can't find the container with id 2c6b6c7a84ecdbe4b3068331c06440d01b73870d2f5c56b456c7f560efede467 Mar 12 14:54:43 crc kubenswrapper[4832]: I0312 14:54:43.961708 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gd9lm" event={"ID":"736a2adc-4c77-4a76-8fb3-a2c008cb8b6b","Type":"ContainerStarted","Data":"8220e70ab746df8e81c1121e3eb231ee09034ecb625e984c77786dca65d28b44"} Mar 12 14:54:43 crc kubenswrapper[4832]: I0312 14:54:43.964733 4832 generic.go:334] "Generic (PLEG): container finished" podID="50f84e8c-fd56-4ff8-94de-906f7ed10a0e" containerID="98581f05645c622afaba2cf4ceb3705e486de894be165fd60577e7d3502120cf" exitCode=0 Mar 12 14:54:43 crc kubenswrapper[4832]: I0312 14:54:43.964775 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgllk" event={"ID":"50f84e8c-fd56-4ff8-94de-906f7ed10a0e","Type":"ContainerDied","Data":"98581f05645c622afaba2cf4ceb3705e486de894be165fd60577e7d3502120cf"} Mar 12 14:54:43 crc kubenswrapper[4832]: I0312 14:54:43.964802 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgllk" event={"ID":"50f84e8c-fd56-4ff8-94de-906f7ed10a0e","Type":"ContainerStarted","Data":"2c6b6c7a84ecdbe4b3068331c06440d01b73870d2f5c56b456c7f560efede467"} Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.438948 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qdms9"] Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.440127 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qdms9" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.442257 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.455112 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qdms9"] Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.562950 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce99647c-4c5e-4b42-99cc-814f7db9212b-utilities\") pod \"redhat-operators-qdms9\" (UID: \"ce99647c-4c5e-4b42-99cc-814f7db9212b\") " pod="openshift-marketplace/redhat-operators-qdms9" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.563064 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce99647c-4c5e-4b42-99cc-814f7db9212b-catalog-content\") pod \"redhat-operators-qdms9\" (UID: \"ce99647c-4c5e-4b42-99cc-814f7db9212b\") " pod="openshift-marketplace/redhat-operators-qdms9" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.563214 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fsn4\" (UniqueName: \"kubernetes.io/projected/ce99647c-4c5e-4b42-99cc-814f7db9212b-kube-api-access-2fsn4\") pod \"redhat-operators-qdms9\" (UID: \"ce99647c-4c5e-4b42-99cc-814f7db9212b\") " pod="openshift-marketplace/redhat-operators-qdms9" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.642148 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6tx8p"] Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.643084 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6tx8p" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.644961 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.654308 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6tx8p"] Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.664589 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glmnh\" (UniqueName: \"kubernetes.io/projected/2e1e3ef9-c646-4657-bbf7-009a8f0528e8-kube-api-access-glmnh\") pod \"community-operators-6tx8p\" (UID: \"2e1e3ef9-c646-4657-bbf7-009a8f0528e8\") " pod="openshift-marketplace/community-operators-6tx8p" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.664707 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fsn4\" (UniqueName: \"kubernetes.io/projected/ce99647c-4c5e-4b42-99cc-814f7db9212b-kube-api-access-2fsn4\") pod \"redhat-operators-qdms9\" (UID: \"ce99647c-4c5e-4b42-99cc-814f7db9212b\") " pod="openshift-marketplace/redhat-operators-qdms9" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.664759 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce99647c-4c5e-4b42-99cc-814f7db9212b-utilities\") pod \"redhat-operators-qdms9\" (UID: \"ce99647c-4c5e-4b42-99cc-814f7db9212b\") " pod="openshift-marketplace/redhat-operators-qdms9" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.664815 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e1e3ef9-c646-4657-bbf7-009a8f0528e8-catalog-content\") pod \"community-operators-6tx8p\" (UID: 
\"2e1e3ef9-c646-4657-bbf7-009a8f0528e8\") " pod="openshift-marketplace/community-operators-6tx8p" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.664843 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e1e3ef9-c646-4657-bbf7-009a8f0528e8-utilities\") pod \"community-operators-6tx8p\" (UID: \"2e1e3ef9-c646-4657-bbf7-009a8f0528e8\") " pod="openshift-marketplace/community-operators-6tx8p" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.664875 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce99647c-4c5e-4b42-99cc-814f7db9212b-catalog-content\") pod \"redhat-operators-qdms9\" (UID: \"ce99647c-4c5e-4b42-99cc-814f7db9212b\") " pod="openshift-marketplace/redhat-operators-qdms9" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.665613 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce99647c-4c5e-4b42-99cc-814f7db9212b-utilities\") pod \"redhat-operators-qdms9\" (UID: \"ce99647c-4c5e-4b42-99cc-814f7db9212b\") " pod="openshift-marketplace/redhat-operators-qdms9" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.665668 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce99647c-4c5e-4b42-99cc-814f7db9212b-catalog-content\") pod \"redhat-operators-qdms9\" (UID: \"ce99647c-4c5e-4b42-99cc-814f7db9212b\") " pod="openshift-marketplace/redhat-operators-qdms9" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.708198 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fsn4\" (UniqueName: \"kubernetes.io/projected/ce99647c-4c5e-4b42-99cc-814f7db9212b-kube-api-access-2fsn4\") pod \"redhat-operators-qdms9\" (UID: \"ce99647c-4c5e-4b42-99cc-814f7db9212b\") " 
pod="openshift-marketplace/redhat-operators-qdms9" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.756951 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qdms9" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.765490 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e1e3ef9-c646-4657-bbf7-009a8f0528e8-catalog-content\") pod \"community-operators-6tx8p\" (UID: \"2e1e3ef9-c646-4657-bbf7-009a8f0528e8\") " pod="openshift-marketplace/community-operators-6tx8p" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.765540 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e1e3ef9-c646-4657-bbf7-009a8f0528e8-utilities\") pod \"community-operators-6tx8p\" (UID: \"2e1e3ef9-c646-4657-bbf7-009a8f0528e8\") " pod="openshift-marketplace/community-operators-6tx8p" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.765598 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glmnh\" (UniqueName: \"kubernetes.io/projected/2e1e3ef9-c646-4657-bbf7-009a8f0528e8-kube-api-access-glmnh\") pod \"community-operators-6tx8p\" (UID: \"2e1e3ef9-c646-4657-bbf7-009a8f0528e8\") " pod="openshift-marketplace/community-operators-6tx8p" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.765868 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e1e3ef9-c646-4657-bbf7-009a8f0528e8-catalog-content\") pod \"community-operators-6tx8p\" (UID: \"2e1e3ef9-c646-4657-bbf7-009a8f0528e8\") " pod="openshift-marketplace/community-operators-6tx8p" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.766194 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2e1e3ef9-c646-4657-bbf7-009a8f0528e8-utilities\") pod \"community-operators-6tx8p\" (UID: \"2e1e3ef9-c646-4657-bbf7-009a8f0528e8\") " pod="openshift-marketplace/community-operators-6tx8p" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.784210 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glmnh\" (UniqueName: \"kubernetes.io/projected/2e1e3ef9-c646-4657-bbf7-009a8f0528e8-kube-api-access-glmnh\") pod \"community-operators-6tx8p\" (UID: \"2e1e3ef9-c646-4657-bbf7-009a8f0528e8\") " pod="openshift-marketplace/community-operators-6tx8p" Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.971825 4832 generic.go:334] "Generic (PLEG): container finished" podID="50f84e8c-fd56-4ff8-94de-906f7ed10a0e" containerID="feec7c14c04099c002a17b5c401e7b231d49fa4a961c9ac17ffc93dd1a09e4b4" exitCode=0 Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.971913 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgllk" event={"ID":"50f84e8c-fd56-4ff8-94de-906f7ed10a0e","Type":"ContainerDied","Data":"feec7c14c04099c002a17b5c401e7b231d49fa4a961c9ac17ffc93dd1a09e4b4"} Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.974050 4832 generic.go:334] "Generic (PLEG): container finished" podID="736a2adc-4c77-4a76-8fb3-a2c008cb8b6b" containerID="8220e70ab746df8e81c1121e3eb231ee09034ecb625e984c77786dca65d28b44" exitCode=0 Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.974077 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gd9lm" event={"ID":"736a2adc-4c77-4a76-8fb3-a2c008cb8b6b","Type":"ContainerDied","Data":"8220e70ab746df8e81c1121e3eb231ee09034ecb625e984c77786dca65d28b44"} Mar 12 14:54:44 crc kubenswrapper[4832]: I0312 14:54:44.993349 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6tx8p" Mar 12 14:54:45 crc kubenswrapper[4832]: I0312 14:54:45.161342 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qdms9"] Mar 12 14:54:45 crc kubenswrapper[4832]: I0312 14:54:45.384563 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6tx8p"] Mar 12 14:54:45 crc kubenswrapper[4832]: I0312 14:54:45.980221 4832 generic.go:334] "Generic (PLEG): container finished" podID="2e1e3ef9-c646-4657-bbf7-009a8f0528e8" containerID="a36ad98b42aa7fac269d9eb1e8f8652ad98bf666ea0bb504ba5fb13647ecdc2c" exitCode=0 Mar 12 14:54:45 crc kubenswrapper[4832]: I0312 14:54:45.980640 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tx8p" event={"ID":"2e1e3ef9-c646-4657-bbf7-009a8f0528e8","Type":"ContainerDied","Data":"a36ad98b42aa7fac269d9eb1e8f8652ad98bf666ea0bb504ba5fb13647ecdc2c"} Mar 12 14:54:45 crc kubenswrapper[4832]: I0312 14:54:45.980774 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tx8p" event={"ID":"2e1e3ef9-c646-4657-bbf7-009a8f0528e8","Type":"ContainerStarted","Data":"4fbc89a00db8de1507c26666ef4a2cb10425cebb8104c226158428e710909688"} Mar 12 14:54:45 crc kubenswrapper[4832]: I0312 14:54:45.983613 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gd9lm" event={"ID":"736a2adc-4c77-4a76-8fb3-a2c008cb8b6b","Type":"ContainerStarted","Data":"757721ca07dbed153c25685a64058be5b15b584f57c752448d7972a15e81742d"} Mar 12 14:54:45 crc kubenswrapper[4832]: I0312 14:54:45.991180 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgllk" event={"ID":"50f84e8c-fd56-4ff8-94de-906f7ed10a0e","Type":"ContainerStarted","Data":"01548d2ca2770d1b1e81d115f101fd67baf9d42dde164022a00c4cf7a24f9109"} Mar 12 14:54:45 crc 
kubenswrapper[4832]: I0312 14:54:45.992368 4832 generic.go:334] "Generic (PLEG): container finished" podID="ce99647c-4c5e-4b42-99cc-814f7db9212b" containerID="afecced00b0206bc256623e9272867721a2f78947950d7cfc924672bd6c01458" exitCode=0 Mar 12 14:54:45 crc kubenswrapper[4832]: I0312 14:54:45.992407 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdms9" event={"ID":"ce99647c-4c5e-4b42-99cc-814f7db9212b","Type":"ContainerDied","Data":"afecced00b0206bc256623e9272867721a2f78947950d7cfc924672bd6c01458"} Mar 12 14:54:45 crc kubenswrapper[4832]: I0312 14:54:45.992448 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdms9" event={"ID":"ce99647c-4c5e-4b42-99cc-814f7db9212b","Type":"ContainerStarted","Data":"e73abc5e010470ecb6742f831648eca178c8a49a905042d5ba3d06424c599741"} Mar 12 14:54:46 crc kubenswrapper[4832]: I0312 14:54:46.018975 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jgllk" podStartSLOduration=2.551821146 podStartE2EDuration="4.018951997s" podCreationTimestamp="2026-03-12 14:54:42 +0000 UTC" firstStartedPulling="2026-03-12 14:54:43.965777755 +0000 UTC m=+442.609791971" lastFinishedPulling="2026-03-12 14:54:45.432908596 +0000 UTC m=+444.076922822" observedRunningTime="2026-03-12 14:54:46.016458736 +0000 UTC m=+444.660472972" watchObservedRunningTime="2026-03-12 14:54:46.018951997 +0000 UTC m=+444.662966223" Mar 12 14:54:46 crc kubenswrapper[4832]: I0312 14:54:46.052550 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gd9lm" podStartSLOduration=1.6225740549999998 podStartE2EDuration="4.052531293s" podCreationTimestamp="2026-03-12 14:54:42 +0000 UTC" firstStartedPulling="2026-03-12 14:54:42.951382351 +0000 UTC m=+441.595396577" lastFinishedPulling="2026-03-12 14:54:45.381339589 +0000 UTC m=+444.025353815" 
observedRunningTime="2026-03-12 14:54:46.031952367 +0000 UTC m=+444.675966593" watchObservedRunningTime="2026-03-12 14:54:46.052531293 +0000 UTC m=+444.696545519" Mar 12 14:54:48 crc kubenswrapper[4832]: I0312 14:54:48.003594 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdms9" event={"ID":"ce99647c-4c5e-4b42-99cc-814f7db9212b","Type":"ContainerStarted","Data":"7e2dc7eb452ae6840086071cbe157da12b7d5be43f5d096dc071bb47160e7965"} Mar 12 14:54:48 crc kubenswrapper[4832]: I0312 14:54:48.005521 4832 generic.go:334] "Generic (PLEG): container finished" podID="2e1e3ef9-c646-4657-bbf7-009a8f0528e8" containerID="2cc7a363533470d46543bf015a846808b7001b14481a3309a113cd6182606cef" exitCode=0 Mar 12 14:54:48 crc kubenswrapper[4832]: I0312 14:54:48.005547 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tx8p" event={"ID":"2e1e3ef9-c646-4657-bbf7-009a8f0528e8","Type":"ContainerDied","Data":"2cc7a363533470d46543bf015a846808b7001b14481a3309a113cd6182606cef"} Mar 12 14:54:49 crc kubenswrapper[4832]: I0312 14:54:49.010363 4832 generic.go:334] "Generic (PLEG): container finished" podID="ce99647c-4c5e-4b42-99cc-814f7db9212b" containerID="7e2dc7eb452ae6840086071cbe157da12b7d5be43f5d096dc071bb47160e7965" exitCode=0 Mar 12 14:54:49 crc kubenswrapper[4832]: I0312 14:54:49.010739 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdms9" event={"ID":"ce99647c-4c5e-4b42-99cc-814f7db9212b","Type":"ContainerDied","Data":"7e2dc7eb452ae6840086071cbe157da12b7d5be43f5d096dc071bb47160e7965"} Mar 12 14:54:49 crc kubenswrapper[4832]: I0312 14:54:49.015182 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tx8p" event={"ID":"2e1e3ef9-c646-4657-bbf7-009a8f0528e8","Type":"ContainerStarted","Data":"93ecff1fe3819d3c9354b7cee254a7f2992fcc41d9c1b8e99d6eeca1aaa66d3f"} Mar 12 14:54:49 crc kubenswrapper[4832]: 
I0312 14:54:49.043276 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6tx8p" podStartSLOduration=2.611571745 podStartE2EDuration="5.043260813s" podCreationTimestamp="2026-03-12 14:54:44 +0000 UTC" firstStartedPulling="2026-03-12 14:54:45.982463068 +0000 UTC m=+444.626477304" lastFinishedPulling="2026-03-12 14:54:48.414152116 +0000 UTC m=+447.058166372" observedRunningTime="2026-03-12 14:54:49.041199154 +0000 UTC m=+447.685213400" watchObservedRunningTime="2026-03-12 14:54:49.043260813 +0000 UTC m=+447.687275039" Mar 12 14:54:50 crc kubenswrapper[4832]: I0312 14:54:50.024279 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdms9" event={"ID":"ce99647c-4c5e-4b42-99cc-814f7db9212b","Type":"ContainerStarted","Data":"ff1592e00a63219bbaf6dd2453513c756d529e9652c4c66c1961f1133477dd5e"} Mar 12 14:54:50 crc kubenswrapper[4832]: I0312 14:54:50.046118 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qdms9" podStartSLOduration=2.656243617 podStartE2EDuration="6.046092187s" podCreationTimestamp="2026-03-12 14:54:44 +0000 UTC" firstStartedPulling="2026-03-12 14:54:45.993602995 +0000 UTC m=+444.637617221" lastFinishedPulling="2026-03-12 14:54:49.383451555 +0000 UTC m=+448.027465791" observedRunningTime="2026-03-12 14:54:50.043576556 +0000 UTC m=+448.687590822" watchObservedRunningTime="2026-03-12 14:54:50.046092187 +0000 UTC m=+448.690106423" Mar 12 14:54:52 crc kubenswrapper[4832]: I0312 14:54:52.357815 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gd9lm" Mar 12 14:54:52 crc kubenswrapper[4832]: I0312 14:54:52.357870 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gd9lm" Mar 12 14:54:52 crc kubenswrapper[4832]: I0312 14:54:52.407804 4832 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gd9lm" Mar 12 14:54:52 crc kubenswrapper[4832]: I0312 14:54:52.555830 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jgllk" Mar 12 14:54:52 crc kubenswrapper[4832]: I0312 14:54:52.555871 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jgllk" Mar 12 14:54:52 crc kubenswrapper[4832]: I0312 14:54:52.595554 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jgllk" Mar 12 14:54:53 crc kubenswrapper[4832]: I0312 14:54:53.080330 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gd9lm" Mar 12 14:54:53 crc kubenswrapper[4832]: I0312 14:54:53.093711 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jgllk" Mar 12 14:54:54 crc kubenswrapper[4832]: I0312 14:54:54.758016 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qdms9" Mar 12 14:54:54 crc kubenswrapper[4832]: I0312 14:54:54.758958 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qdms9" Mar 12 14:54:54 crc kubenswrapper[4832]: I0312 14:54:54.994435 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6tx8p" Mar 12 14:54:54 crc kubenswrapper[4832]: I0312 14:54:54.994558 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6tx8p" Mar 12 14:54:55 crc kubenswrapper[4832]: I0312 14:54:55.056152 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-6tx8p" Mar 12 14:54:55 crc kubenswrapper[4832]: I0312 14:54:55.104333 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6tx8p" Mar 12 14:54:55 crc kubenswrapper[4832]: I0312 14:54:55.798748 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qdms9" podUID="ce99647c-4c5e-4b42-99cc-814f7db9212b" containerName="registry-server" probeResult="failure" output=< Mar 12 14:54:55 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Mar 12 14:54:55 crc kubenswrapper[4832]: > Mar 12 14:54:56 crc kubenswrapper[4832]: I0312 14:54:56.314494 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:54:56 crc kubenswrapper[4832]: I0312 14:54:56.314633 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:55:04 crc kubenswrapper[4832]: I0312 14:55:04.813314 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qdms9" Mar 12 14:55:04 crc kubenswrapper[4832]: I0312 14:55:04.860479 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qdms9" Mar 12 14:55:26 crc kubenswrapper[4832]: I0312 14:55:26.313743 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:55:26 crc kubenswrapper[4832]: I0312 14:55:26.314279 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:55:26 crc kubenswrapper[4832]: I0312 14:55:26.314332 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 14:55:26 crc kubenswrapper[4832]: I0312 14:55:26.314913 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db1ee485f07778922ad94a6aead05c59f51b934d1a75207bc586280604f97237"} pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 14:55:26 crc kubenswrapper[4832]: I0312 14:55:26.314971 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" containerID="cri-o://db1ee485f07778922ad94a6aead05c59f51b934d1a75207bc586280604f97237" gracePeriod=600 Mar 12 14:55:27 crc kubenswrapper[4832]: I0312 14:55:27.226785 4832 generic.go:334] "Generic (PLEG): container finished" podID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerID="db1ee485f07778922ad94a6aead05c59f51b934d1a75207bc586280604f97237" exitCode=0 Mar 12 14:55:27 crc kubenswrapper[4832]: I0312 14:55:27.226857 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerDied","Data":"db1ee485f07778922ad94a6aead05c59f51b934d1a75207bc586280604f97237"} Mar 12 14:55:27 crc kubenswrapper[4832]: I0312 14:55:27.228077 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerStarted","Data":"a6b3b7a564b25e88f4a641ce57b7d20bb5eb79ceb00743653cfa2463d685ad6c"} Mar 12 14:55:27 crc kubenswrapper[4832]: I0312 14:55:27.228116 4832 scope.go:117] "RemoveContainer" containerID="f2c42b1c23c2acac1a05003e08e6849816d0516a0c76f2418501169bc5233d70" Mar 12 14:56:00 crc kubenswrapper[4832]: I0312 14:56:00.141540 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555456-ccz8p"] Mar 12 14:56:00 crc kubenswrapper[4832]: I0312 14:56:00.142840 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555456-ccz8p" Mar 12 14:56:00 crc kubenswrapper[4832]: I0312 14:56:00.145661 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:56:00 crc kubenswrapper[4832]: I0312 14:56:00.145901 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:56:00 crc kubenswrapper[4832]: I0312 14:56:00.146691 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 14:56:00 crc kubenswrapper[4832]: I0312 14:56:00.151280 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555456-ccz8p"] Mar 12 14:56:00 crc kubenswrapper[4832]: I0312 14:56:00.261107 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpqk6\" (UniqueName: \"kubernetes.io/projected/b0975370-5c54-4fb7-a1ce-f032ad5085c2-kube-api-access-mpqk6\") pod \"auto-csr-approver-29555456-ccz8p\" (UID: \"b0975370-5c54-4fb7-a1ce-f032ad5085c2\") " pod="openshift-infra/auto-csr-approver-29555456-ccz8p" Mar 12 14:56:00 crc kubenswrapper[4832]: I0312 14:56:00.363358 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpqk6\" (UniqueName: \"kubernetes.io/projected/b0975370-5c54-4fb7-a1ce-f032ad5085c2-kube-api-access-mpqk6\") pod \"auto-csr-approver-29555456-ccz8p\" (UID: \"b0975370-5c54-4fb7-a1ce-f032ad5085c2\") " pod="openshift-infra/auto-csr-approver-29555456-ccz8p" Mar 12 14:56:00 crc kubenswrapper[4832]: I0312 14:56:00.393091 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpqk6\" (UniqueName: \"kubernetes.io/projected/b0975370-5c54-4fb7-a1ce-f032ad5085c2-kube-api-access-mpqk6\") pod \"auto-csr-approver-29555456-ccz8p\" (UID: \"b0975370-5c54-4fb7-a1ce-f032ad5085c2\") " 
pod="openshift-infra/auto-csr-approver-29555456-ccz8p" Mar 12 14:56:00 crc kubenswrapper[4832]: I0312 14:56:00.462635 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555456-ccz8p" Mar 12 14:56:01 crc kubenswrapper[4832]: I0312 14:56:01.246925 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555456-ccz8p"] Mar 12 14:56:01 crc kubenswrapper[4832]: I0312 14:56:01.255881 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 14:56:01 crc kubenswrapper[4832]: I0312 14:56:01.439227 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555456-ccz8p" event={"ID":"b0975370-5c54-4fb7-a1ce-f032ad5085c2","Type":"ContainerStarted","Data":"af5ffb25d6b12da811de1d7ea5888c2939bf603016a90941427d9a356865c9f2"} Mar 12 14:56:03 crc kubenswrapper[4832]: I0312 14:56:03.450387 4832 generic.go:334] "Generic (PLEG): container finished" podID="b0975370-5c54-4fb7-a1ce-f032ad5085c2" containerID="71e5cb51397ab4f5704e9b39092720b500b0a02f7521b9d447435dff3e9844fe" exitCode=0 Mar 12 14:56:03 crc kubenswrapper[4832]: I0312 14:56:03.450446 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555456-ccz8p" event={"ID":"b0975370-5c54-4fb7-a1ce-f032ad5085c2","Type":"ContainerDied","Data":"71e5cb51397ab4f5704e9b39092720b500b0a02f7521b9d447435dff3e9844fe"} Mar 12 14:56:04 crc kubenswrapper[4832]: I0312 14:56:04.684454 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555456-ccz8p" Mar 12 14:56:04 crc kubenswrapper[4832]: I0312 14:56:04.814795 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpqk6\" (UniqueName: \"kubernetes.io/projected/b0975370-5c54-4fb7-a1ce-f032ad5085c2-kube-api-access-mpqk6\") pod \"b0975370-5c54-4fb7-a1ce-f032ad5085c2\" (UID: \"b0975370-5c54-4fb7-a1ce-f032ad5085c2\") " Mar 12 14:56:04 crc kubenswrapper[4832]: I0312 14:56:04.820026 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0975370-5c54-4fb7-a1ce-f032ad5085c2-kube-api-access-mpqk6" (OuterVolumeSpecName: "kube-api-access-mpqk6") pod "b0975370-5c54-4fb7-a1ce-f032ad5085c2" (UID: "b0975370-5c54-4fb7-a1ce-f032ad5085c2"). InnerVolumeSpecName "kube-api-access-mpqk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:56:04 crc kubenswrapper[4832]: I0312 14:56:04.915977 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpqk6\" (UniqueName: \"kubernetes.io/projected/b0975370-5c54-4fb7-a1ce-f032ad5085c2-kube-api-access-mpqk6\") on node \"crc\" DevicePath \"\"" Mar 12 14:56:05 crc kubenswrapper[4832]: I0312 14:56:05.464649 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555456-ccz8p" event={"ID":"b0975370-5c54-4fb7-a1ce-f032ad5085c2","Type":"ContainerDied","Data":"af5ffb25d6b12da811de1d7ea5888c2939bf603016a90941427d9a356865c9f2"} Mar 12 14:56:05 crc kubenswrapper[4832]: I0312 14:56:05.464712 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5ffb25d6b12da811de1d7ea5888c2939bf603016a90941427d9a356865c9f2" Mar 12 14:56:05 crc kubenswrapper[4832]: I0312 14:56:05.464738 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555456-ccz8p" Mar 12 14:56:05 crc kubenswrapper[4832]: I0312 14:56:05.764837 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555450-w9jtm"] Mar 12 14:56:05 crc kubenswrapper[4832]: I0312 14:56:05.770455 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555450-w9jtm"] Mar 12 14:56:06 crc kubenswrapper[4832]: I0312 14:56:06.627268 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35da9b9e-133b-4d7f-a32e-908d9fc7734b" path="/var/lib/kubelet/pods/35da9b9e-133b-4d7f-a32e-908d9fc7734b/volumes" Mar 12 14:57:26 crc kubenswrapper[4832]: I0312 14:57:26.314409 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:57:26 crc kubenswrapper[4832]: I0312 14:57:26.315372 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:57:56 crc kubenswrapper[4832]: I0312 14:57:56.314004 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:57:56 crc kubenswrapper[4832]: I0312 14:57:56.314803 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" 
podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:58:00 crc kubenswrapper[4832]: I0312 14:58:00.149647 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555458-n99rk"] Mar 12 14:58:00 crc kubenswrapper[4832]: E0312 14:58:00.150364 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0975370-5c54-4fb7-a1ce-f032ad5085c2" containerName="oc" Mar 12 14:58:00 crc kubenswrapper[4832]: I0312 14:58:00.150391 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0975370-5c54-4fb7-a1ce-f032ad5085c2" containerName="oc" Mar 12 14:58:00 crc kubenswrapper[4832]: I0312 14:58:00.150636 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0975370-5c54-4fb7-a1ce-f032ad5085c2" containerName="oc" Mar 12 14:58:00 crc kubenswrapper[4832]: I0312 14:58:00.151430 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555458-n99rk" Mar 12 14:58:00 crc kubenswrapper[4832]: I0312 14:58:00.154418 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:58:00 crc kubenswrapper[4832]: I0312 14:58:00.154474 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:58:00 crc kubenswrapper[4832]: I0312 14:58:00.155095 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 14:58:00 crc kubenswrapper[4832]: I0312 14:58:00.168117 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555458-n99rk"] Mar 12 14:58:00 crc kubenswrapper[4832]: I0312 14:58:00.208802 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sp8f\" (UniqueName: \"kubernetes.io/projected/046f9728-41b1-4ad1-848b-4f34b021e284-kube-api-access-7sp8f\") pod \"auto-csr-approver-29555458-n99rk\" (UID: \"046f9728-41b1-4ad1-848b-4f34b021e284\") " pod="openshift-infra/auto-csr-approver-29555458-n99rk" Mar 12 14:58:00 crc kubenswrapper[4832]: I0312 14:58:00.310234 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sp8f\" (UniqueName: \"kubernetes.io/projected/046f9728-41b1-4ad1-848b-4f34b021e284-kube-api-access-7sp8f\") pod \"auto-csr-approver-29555458-n99rk\" (UID: \"046f9728-41b1-4ad1-848b-4f34b021e284\") " pod="openshift-infra/auto-csr-approver-29555458-n99rk" Mar 12 14:58:00 crc kubenswrapper[4832]: I0312 14:58:00.331925 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sp8f\" (UniqueName: \"kubernetes.io/projected/046f9728-41b1-4ad1-848b-4f34b021e284-kube-api-access-7sp8f\") pod \"auto-csr-approver-29555458-n99rk\" (UID: \"046f9728-41b1-4ad1-848b-4f34b021e284\") " 
pod="openshift-infra/auto-csr-approver-29555458-n99rk" Mar 12 14:58:00 crc kubenswrapper[4832]: I0312 14:58:00.470965 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555458-n99rk" Mar 12 14:58:00 crc kubenswrapper[4832]: I0312 14:58:00.898536 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555458-n99rk"] Mar 12 14:58:01 crc kubenswrapper[4832]: I0312 14:58:01.124911 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555458-n99rk" event={"ID":"046f9728-41b1-4ad1-848b-4f34b021e284","Type":"ContainerStarted","Data":"ccb7aa12d33e71c03929245a4a961023b118bc9879c96c6bd825c0d8158f5be5"} Mar 12 14:58:03 crc kubenswrapper[4832]: I0312 14:58:03.136113 4832 generic.go:334] "Generic (PLEG): container finished" podID="046f9728-41b1-4ad1-848b-4f34b021e284" containerID="93932f73c89129855c3402f73985d66772d09b93854877301c0e466bbc9ef912" exitCode=0 Mar 12 14:58:03 crc kubenswrapper[4832]: I0312 14:58:03.136194 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555458-n99rk" event={"ID":"046f9728-41b1-4ad1-848b-4f34b021e284","Type":"ContainerDied","Data":"93932f73c89129855c3402f73985d66772d09b93854877301c0e466bbc9ef912"} Mar 12 14:58:04 crc kubenswrapper[4832]: I0312 14:58:04.381640 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555458-n99rk" Mar 12 14:58:04 crc kubenswrapper[4832]: I0312 14:58:04.460424 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sp8f\" (UniqueName: \"kubernetes.io/projected/046f9728-41b1-4ad1-848b-4f34b021e284-kube-api-access-7sp8f\") pod \"046f9728-41b1-4ad1-848b-4f34b021e284\" (UID: \"046f9728-41b1-4ad1-848b-4f34b021e284\") " Mar 12 14:58:04 crc kubenswrapper[4832]: I0312 14:58:04.465302 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/046f9728-41b1-4ad1-848b-4f34b021e284-kube-api-access-7sp8f" (OuterVolumeSpecName: "kube-api-access-7sp8f") pod "046f9728-41b1-4ad1-848b-4f34b021e284" (UID: "046f9728-41b1-4ad1-848b-4f34b021e284"). InnerVolumeSpecName "kube-api-access-7sp8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:58:04 crc kubenswrapper[4832]: I0312 14:58:04.561619 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sp8f\" (UniqueName: \"kubernetes.io/projected/046f9728-41b1-4ad1-848b-4f34b021e284-kube-api-access-7sp8f\") on node \"crc\" DevicePath \"\"" Mar 12 14:58:05 crc kubenswrapper[4832]: I0312 14:58:05.156659 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555458-n99rk" event={"ID":"046f9728-41b1-4ad1-848b-4f34b021e284","Type":"ContainerDied","Data":"ccb7aa12d33e71c03929245a4a961023b118bc9879c96c6bd825c0d8158f5be5"} Mar 12 14:58:05 crc kubenswrapper[4832]: I0312 14:58:05.156703 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccb7aa12d33e71c03929245a4a961023b118bc9879c96c6bd825c0d8158f5be5" Mar 12 14:58:05 crc kubenswrapper[4832]: I0312 14:58:05.156724 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555458-n99rk" Mar 12 14:58:05 crc kubenswrapper[4832]: I0312 14:58:05.436342 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555452-gbgxl"] Mar 12 14:58:05 crc kubenswrapper[4832]: I0312 14:58:05.441198 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555452-gbgxl"] Mar 12 14:58:06 crc kubenswrapper[4832]: I0312 14:58:06.629275 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eba41832-c6df-4b39-a585-a27d72b2d7fd" path="/var/lib/kubelet/pods/eba41832-c6df-4b39-a585-a27d72b2d7fd/volumes" Mar 12 14:58:26 crc kubenswrapper[4832]: I0312 14:58:26.315003 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:58:26 crc kubenswrapper[4832]: I0312 14:58:26.315688 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:58:26 crc kubenswrapper[4832]: I0312 14:58:26.315767 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 14:58:26 crc kubenswrapper[4832]: I0312 14:58:26.318140 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6b3b7a564b25e88f4a641ce57b7d20bb5eb79ceb00743653cfa2463d685ad6c"} pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 14:58:26 crc kubenswrapper[4832]: I0312 14:58:26.318311 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" containerID="cri-o://a6b3b7a564b25e88f4a641ce57b7d20bb5eb79ceb00743653cfa2463d685ad6c" gracePeriod=600 Mar 12 14:58:27 crc kubenswrapper[4832]: I0312 14:58:27.306553 4832 generic.go:334] "Generic (PLEG): container finished" podID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerID="a6b3b7a564b25e88f4a641ce57b7d20bb5eb79ceb00743653cfa2463d685ad6c" exitCode=0 Mar 12 14:58:27 crc kubenswrapper[4832]: I0312 14:58:27.306686 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerDied","Data":"a6b3b7a564b25e88f4a641ce57b7d20bb5eb79ceb00743653cfa2463d685ad6c"} Mar 12 14:58:27 crc kubenswrapper[4832]: I0312 14:58:27.307490 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerStarted","Data":"cb2ae425dc6888cba35a19e88d8e8d25fe43467e7d813212653d18ffcd00311f"} Mar 12 14:58:27 crc kubenswrapper[4832]: I0312 14:58:27.307762 4832 scope.go:117] "RemoveContainer" containerID="db1ee485f07778922ad94a6aead05c59f51b934d1a75207bc586280604f97237" Mar 12 14:58:41 crc kubenswrapper[4832]: I0312 14:58:41.202675 4832 scope.go:117] "RemoveContainer" containerID="170b98eaee6906093b6abb700460c830fa3a2f8071efff79ef655720f6d81364" Mar 12 14:58:41 crc kubenswrapper[4832]: I0312 14:58:41.238058 4832 scope.go:117] "RemoveContainer" containerID="46394632e85d3cb84f266c578a9b706e26c0e47c333ccad5557a6b380f5c8c7c" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 
15:00:00.137242 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555460-vd8s7"] Mar 12 15:00:00 crc kubenswrapper[4832]: E0312 15:00:00.138162 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="046f9728-41b1-4ad1-848b-4f34b021e284" containerName="oc" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.138181 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="046f9728-41b1-4ad1-848b-4f34b021e284" containerName="oc" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.138349 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="046f9728-41b1-4ad1-848b-4f34b021e284" containerName="oc" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.138917 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555460-vd8s7" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.145630 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.145857 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.145898 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.145988 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9"] Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.146968 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.150774 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.151453 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.153492 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9"] Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.163661 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555460-vd8s7"] Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.231469 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/683ab2e5-03a0-46dd-87df-0d785d36f2d1-config-volume\") pod \"collect-profiles-29555460-49rh9\" (UID: \"683ab2e5-03a0-46dd-87df-0d785d36f2d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.231564 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgr84\" (UniqueName: \"kubernetes.io/projected/269dd911-a1c0-453a-ac9b-0b235541e941-kube-api-access-wgr84\") pod \"auto-csr-approver-29555460-vd8s7\" (UID: \"269dd911-a1c0-453a-ac9b-0b235541e941\") " pod="openshift-infra/auto-csr-approver-29555460-vd8s7" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.231594 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rq79\" (UniqueName: 
\"kubernetes.io/projected/683ab2e5-03a0-46dd-87df-0d785d36f2d1-kube-api-access-4rq79\") pod \"collect-profiles-29555460-49rh9\" (UID: \"683ab2e5-03a0-46dd-87df-0d785d36f2d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.231632 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/683ab2e5-03a0-46dd-87df-0d785d36f2d1-secret-volume\") pod \"collect-profiles-29555460-49rh9\" (UID: \"683ab2e5-03a0-46dd-87df-0d785d36f2d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.332771 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/683ab2e5-03a0-46dd-87df-0d785d36f2d1-config-volume\") pod \"collect-profiles-29555460-49rh9\" (UID: \"683ab2e5-03a0-46dd-87df-0d785d36f2d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.332820 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgr84\" (UniqueName: \"kubernetes.io/projected/269dd911-a1c0-453a-ac9b-0b235541e941-kube-api-access-wgr84\") pod \"auto-csr-approver-29555460-vd8s7\" (UID: \"269dd911-a1c0-453a-ac9b-0b235541e941\") " pod="openshift-infra/auto-csr-approver-29555460-vd8s7" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.333041 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rq79\" (UniqueName: \"kubernetes.io/projected/683ab2e5-03a0-46dd-87df-0d785d36f2d1-kube-api-access-4rq79\") pod \"collect-profiles-29555460-49rh9\" (UID: \"683ab2e5-03a0-46dd-87df-0d785d36f2d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9" Mar 12 15:00:00 crc 
kubenswrapper[4832]: I0312 15:00:00.333083 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/683ab2e5-03a0-46dd-87df-0d785d36f2d1-secret-volume\") pod \"collect-profiles-29555460-49rh9\" (UID: \"683ab2e5-03a0-46dd-87df-0d785d36f2d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.333810 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/683ab2e5-03a0-46dd-87df-0d785d36f2d1-config-volume\") pod \"collect-profiles-29555460-49rh9\" (UID: \"683ab2e5-03a0-46dd-87df-0d785d36f2d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.338970 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/683ab2e5-03a0-46dd-87df-0d785d36f2d1-secret-volume\") pod \"collect-profiles-29555460-49rh9\" (UID: \"683ab2e5-03a0-46dd-87df-0d785d36f2d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.349341 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rq79\" (UniqueName: \"kubernetes.io/projected/683ab2e5-03a0-46dd-87df-0d785d36f2d1-kube-api-access-4rq79\") pod \"collect-profiles-29555460-49rh9\" (UID: \"683ab2e5-03a0-46dd-87df-0d785d36f2d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.351638 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgr84\" (UniqueName: \"kubernetes.io/projected/269dd911-a1c0-453a-ac9b-0b235541e941-kube-api-access-wgr84\") pod \"auto-csr-approver-29555460-vd8s7\" (UID: \"269dd911-a1c0-453a-ac9b-0b235541e941\") " 
pod="openshift-infra/auto-csr-approver-29555460-vd8s7" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.459908 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555460-vd8s7" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.472628 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9" Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.640636 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9"] Mar 12 15:00:00 crc kubenswrapper[4832]: W0312 15:00:00.654564 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod683ab2e5_03a0_46dd_87df_0d785d36f2d1.slice/crio-c6356ff739b44c8ce3c79b8ce9a1a959b3f9b81e4edc3ec6eee766132cc8dc3e WatchSource:0}: Error finding container c6356ff739b44c8ce3c79b8ce9a1a959b3f9b81e4edc3ec6eee766132cc8dc3e: Status 404 returned error can't find the container with id c6356ff739b44c8ce3c79b8ce9a1a959b3f9b81e4edc3ec6eee766132cc8dc3e Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.678391 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555460-vd8s7"] Mar 12 15:00:00 crc kubenswrapper[4832]: W0312 15:00:00.681194 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod269dd911_a1c0_453a_ac9b_0b235541e941.slice/crio-40987a7a62abe1e5c61abc1fefb3ea78a54620bcb8df4747f35f850d43b93f73 WatchSource:0}: Error finding container 40987a7a62abe1e5c61abc1fefb3ea78a54620bcb8df4747f35f850d43b93f73: Status 404 returned error can't find the container with id 40987a7a62abe1e5c61abc1fefb3ea78a54620bcb8df4747f35f850d43b93f73 Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.880163 4832 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9" event={"ID":"683ab2e5-03a0-46dd-87df-0d785d36f2d1","Type":"ContainerStarted","Data":"ddb538c08c232b60e0e7d25c4bb64889f4d9a4045306cea03339db95d4b37a2a"} Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.880216 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9" event={"ID":"683ab2e5-03a0-46dd-87df-0d785d36f2d1","Type":"ContainerStarted","Data":"c6356ff739b44c8ce3c79b8ce9a1a959b3f9b81e4edc3ec6eee766132cc8dc3e"} Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.880919 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555460-vd8s7" event={"ID":"269dd911-a1c0-453a-ac9b-0b235541e941","Type":"ContainerStarted","Data":"40987a7a62abe1e5c61abc1fefb3ea78a54620bcb8df4747f35f850d43b93f73"} Mar 12 15:00:00 crc kubenswrapper[4832]: I0312 15:00:00.895656 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9" podStartSLOduration=0.895636887 podStartE2EDuration="895.636887ms" podCreationTimestamp="2026-03-12 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:00:00.895338898 +0000 UTC m=+759.539353164" watchObservedRunningTime="2026-03-12 15:00:00.895636887 +0000 UTC m=+759.539651123" Mar 12 15:00:01 crc kubenswrapper[4832]: I0312 15:00:01.888615 4832 generic.go:334] "Generic (PLEG): container finished" podID="683ab2e5-03a0-46dd-87df-0d785d36f2d1" containerID="ddb538c08c232b60e0e7d25c4bb64889f4d9a4045306cea03339db95d4b37a2a" exitCode=0 Mar 12 15:00:01 crc kubenswrapper[4832]: I0312 15:00:01.888689 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9" 
event={"ID":"683ab2e5-03a0-46dd-87df-0d785d36f2d1","Type":"ContainerDied","Data":"ddb538c08c232b60e0e7d25c4bb64889f4d9a4045306cea03339db95d4b37a2a"} Mar 12 15:00:03 crc kubenswrapper[4832]: I0312 15:00:03.162640 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9" Mar 12 15:00:03 crc kubenswrapper[4832]: I0312 15:00:03.167607 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/683ab2e5-03a0-46dd-87df-0d785d36f2d1-config-volume\") pod \"683ab2e5-03a0-46dd-87df-0d785d36f2d1\" (UID: \"683ab2e5-03a0-46dd-87df-0d785d36f2d1\") " Mar 12 15:00:03 crc kubenswrapper[4832]: I0312 15:00:03.167672 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/683ab2e5-03a0-46dd-87df-0d785d36f2d1-secret-volume\") pod \"683ab2e5-03a0-46dd-87df-0d785d36f2d1\" (UID: \"683ab2e5-03a0-46dd-87df-0d785d36f2d1\") " Mar 12 15:00:03 crc kubenswrapper[4832]: I0312 15:00:03.167701 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rq79\" (UniqueName: \"kubernetes.io/projected/683ab2e5-03a0-46dd-87df-0d785d36f2d1-kube-api-access-4rq79\") pod \"683ab2e5-03a0-46dd-87df-0d785d36f2d1\" (UID: \"683ab2e5-03a0-46dd-87df-0d785d36f2d1\") " Mar 12 15:00:03 crc kubenswrapper[4832]: I0312 15:00:03.168093 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/683ab2e5-03a0-46dd-87df-0d785d36f2d1-config-volume" (OuterVolumeSpecName: "config-volume") pod "683ab2e5-03a0-46dd-87df-0d785d36f2d1" (UID: "683ab2e5-03a0-46dd-87df-0d785d36f2d1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:00:03 crc kubenswrapper[4832]: I0312 15:00:03.176322 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/683ab2e5-03a0-46dd-87df-0d785d36f2d1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "683ab2e5-03a0-46dd-87df-0d785d36f2d1" (UID: "683ab2e5-03a0-46dd-87df-0d785d36f2d1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:00:03 crc kubenswrapper[4832]: I0312 15:00:03.176783 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/683ab2e5-03a0-46dd-87df-0d785d36f2d1-kube-api-access-4rq79" (OuterVolumeSpecName: "kube-api-access-4rq79") pod "683ab2e5-03a0-46dd-87df-0d785d36f2d1" (UID: "683ab2e5-03a0-46dd-87df-0d785d36f2d1"). InnerVolumeSpecName "kube-api-access-4rq79". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:00:03 crc kubenswrapper[4832]: I0312 15:00:03.269007 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/683ab2e5-03a0-46dd-87df-0d785d36f2d1-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:03 crc kubenswrapper[4832]: I0312 15:00:03.269067 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/683ab2e5-03a0-46dd-87df-0d785d36f2d1-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:03 crc kubenswrapper[4832]: I0312 15:00:03.269081 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rq79\" (UniqueName: \"kubernetes.io/projected/683ab2e5-03a0-46dd-87df-0d785d36f2d1-kube-api-access-4rq79\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:03 crc kubenswrapper[4832]: I0312 15:00:03.908612 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9" 
event={"ID":"683ab2e5-03a0-46dd-87df-0d785d36f2d1","Type":"ContainerDied","Data":"c6356ff739b44c8ce3c79b8ce9a1a959b3f9b81e4edc3ec6eee766132cc8dc3e"} Mar 12 15:00:03 crc kubenswrapper[4832]: I0312 15:00:03.908663 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6356ff739b44c8ce3c79b8ce9a1a959b3f9b81e4edc3ec6eee766132cc8dc3e" Mar 12 15:00:03 crc kubenswrapper[4832]: I0312 15:00:03.908705 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.155470 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-n6ng4"] Mar 12 15:00:16 crc kubenswrapper[4832]: E0312 15:00:16.156216 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683ab2e5-03a0-46dd-87df-0d785d36f2d1" containerName="collect-profiles" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.156230 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="683ab2e5-03a0-46dd-87df-0d785d36f2d1" containerName="collect-profiles" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.156325 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="683ab2e5-03a0-46dd-87df-0d785d36f2d1" containerName="collect-profiles" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.156858 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-n6ng4" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.158915 4832 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-fwdlj" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.159718 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-wnbqc"] Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.160395 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.162391 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.162733 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-wnbqc" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.163917 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-n6ng4"] Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.166482 4832 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-7f9sr" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.171973 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjn66\" (UniqueName: \"kubernetes.io/projected/3c02e749-9e6c-43c2-8aec-e8a4be5c1664-kube-api-access-rjn66\") pod \"cert-manager-cainjector-cf98fcc89-n6ng4\" (UID: \"3c02e749-9e6c-43c2-8aec-e8a4be5c1664\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-n6ng4" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.172152 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr5sj\" (UniqueName: 
\"kubernetes.io/projected/c1a5c18d-2238-41d7-abe8-e5b6ddba52ba-kube-api-access-dr5sj\") pod \"cert-manager-858654f9db-wnbqc\" (UID: \"c1a5c18d-2238-41d7-abe8-e5b6ddba52ba\") " pod="cert-manager/cert-manager-858654f9db-wnbqc" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.176826 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-wnbqc"] Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.184987 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mstfv"] Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.186111 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-mstfv" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.188797 4832 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-db8pt" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.194111 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mstfv"] Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.274349 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr5sj\" (UniqueName: \"kubernetes.io/projected/c1a5c18d-2238-41d7-abe8-e5b6ddba52ba-kube-api-access-dr5sj\") pod \"cert-manager-858654f9db-wnbqc\" (UID: \"c1a5c18d-2238-41d7-abe8-e5b6ddba52ba\") " pod="cert-manager/cert-manager-858654f9db-wnbqc" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.274457 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46wfp\" (UniqueName: \"kubernetes.io/projected/793aea64-41ee-4933-b96d-c95f08a1b554-kube-api-access-46wfp\") pod \"cert-manager-webhook-687f57d79b-mstfv\" (UID: \"793aea64-41ee-4933-b96d-c95f08a1b554\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mstfv" Mar 12 15:00:16 crc 
kubenswrapper[4832]: I0312 15:00:16.274531 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjn66\" (UniqueName: \"kubernetes.io/projected/3c02e749-9e6c-43c2-8aec-e8a4be5c1664-kube-api-access-rjn66\") pod \"cert-manager-cainjector-cf98fcc89-n6ng4\" (UID: \"3c02e749-9e6c-43c2-8aec-e8a4be5c1664\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-n6ng4" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.298060 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr5sj\" (UniqueName: \"kubernetes.io/projected/c1a5c18d-2238-41d7-abe8-e5b6ddba52ba-kube-api-access-dr5sj\") pod \"cert-manager-858654f9db-wnbqc\" (UID: \"c1a5c18d-2238-41d7-abe8-e5b6ddba52ba\") " pod="cert-manager/cert-manager-858654f9db-wnbqc" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.298665 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjn66\" (UniqueName: \"kubernetes.io/projected/3c02e749-9e6c-43c2-8aec-e8a4be5c1664-kube-api-access-rjn66\") pod \"cert-manager-cainjector-cf98fcc89-n6ng4\" (UID: \"3c02e749-9e6c-43c2-8aec-e8a4be5c1664\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-n6ng4" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.374994 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46wfp\" (UniqueName: \"kubernetes.io/projected/793aea64-41ee-4933-b96d-c95f08a1b554-kube-api-access-46wfp\") pod \"cert-manager-webhook-687f57d79b-mstfv\" (UID: \"793aea64-41ee-4933-b96d-c95f08a1b554\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mstfv" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.393691 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46wfp\" (UniqueName: \"kubernetes.io/projected/793aea64-41ee-4933-b96d-c95f08a1b554-kube-api-access-46wfp\") pod \"cert-manager-webhook-687f57d79b-mstfv\" (UID: 
\"793aea64-41ee-4933-b96d-c95f08a1b554\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mstfv" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.479499 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-n6ng4" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.490243 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-wnbqc" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.509125 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-mstfv" Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.710556 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-n6ng4"] Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.735099 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-wnbqc"] Mar 12 15:00:16 crc kubenswrapper[4832]: W0312 15:00:16.739995 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1a5c18d_2238_41d7_abe8_e5b6ddba52ba.slice/crio-d9e9bf758586e28ca6f48bfdcc2b02a8ac35a0aaf8e6f01e16a461e877649814 WatchSource:0}: Error finding container d9e9bf758586e28ca6f48bfdcc2b02a8ac35a0aaf8e6f01e16a461e877649814: Status 404 returned error can't find the container with id d9e9bf758586e28ca6f48bfdcc2b02a8ac35a0aaf8e6f01e16a461e877649814 Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.768549 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mstfv"] Mar 12 15:00:16 crc kubenswrapper[4832]: W0312 15:00:16.772183 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod793aea64_41ee_4933_b96d_c95f08a1b554.slice/crio-530728a97531c3cbd99dfd7f93701001d692fc8d3fdc3f63b2c6b817bf9b0844 WatchSource:0}: Error finding container 530728a97531c3cbd99dfd7f93701001d692fc8d3fdc3f63b2c6b817bf9b0844: Status 404 returned error can't find the container with id 530728a97531c3cbd99dfd7f93701001d692fc8d3fdc3f63b2c6b817bf9b0844 Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.981121 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-wnbqc" event={"ID":"c1a5c18d-2238-41d7-abe8-e5b6ddba52ba","Type":"ContainerStarted","Data":"d9e9bf758586e28ca6f48bfdcc2b02a8ac35a0aaf8e6f01e16a461e877649814"} Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.982486 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-n6ng4" event={"ID":"3c02e749-9e6c-43c2-8aec-e8a4be5c1664","Type":"ContainerStarted","Data":"efa9758518674e7f980deac038493f730532db4a3f244046f0dc1ea5c9aaa13a"} Mar 12 15:00:16 crc kubenswrapper[4832]: I0312 15:00:16.983966 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mstfv" event={"ID":"793aea64-41ee-4933-b96d-c95f08a1b554","Type":"ContainerStarted","Data":"530728a97531c3cbd99dfd7f93701001d692fc8d3fdc3f63b2c6b817bf9b0844"} Mar 12 15:00:21 crc kubenswrapper[4832]: I0312 15:00:21.008022 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mstfv" event={"ID":"793aea64-41ee-4933-b96d-c95f08a1b554","Type":"ContainerStarted","Data":"76fd3cf164ef72f09c2db87b2da5d65bddf8efa472d643d6ada653211548c2e2"} Mar 12 15:00:21 crc kubenswrapper[4832]: I0312 15:00:21.008601 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-mstfv" Mar 12 15:00:21 crc kubenswrapper[4832]: I0312 15:00:21.011178 4832 generic.go:334] "Generic 
(PLEG): container finished" podID="269dd911-a1c0-453a-ac9b-0b235541e941" containerID="d083ee601fb1cd07d42d4cf6482f91a8dd5368b3a691a496e07dc39027bd9ab6" exitCode=0 Mar 12 15:00:21 crc kubenswrapper[4832]: I0312 15:00:21.011273 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555460-vd8s7" event={"ID":"269dd911-a1c0-453a-ac9b-0b235541e941","Type":"ContainerDied","Data":"d083ee601fb1cd07d42d4cf6482f91a8dd5368b3a691a496e07dc39027bd9ab6"} Mar 12 15:00:21 crc kubenswrapper[4832]: I0312 15:00:21.012825 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-n6ng4" event={"ID":"3c02e749-9e6c-43c2-8aec-e8a4be5c1664","Type":"ContainerStarted","Data":"8d36449bcfe132b21f31a0af2dc7746caae1562cbf130c5eebb65d8cc201367a"} Mar 12 15:00:21 crc kubenswrapper[4832]: I0312 15:00:21.024450 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-mstfv" podStartSLOduration=1.95261882 podStartE2EDuration="5.024433657s" podCreationTimestamp="2026-03-12 15:00:16 +0000 UTC" firstStartedPulling="2026-03-12 15:00:16.7741304 +0000 UTC m=+775.418144626" lastFinishedPulling="2026-03-12 15:00:19.845945247 +0000 UTC m=+778.489959463" observedRunningTime="2026-03-12 15:00:21.023294054 +0000 UTC m=+779.667308300" watchObservedRunningTime="2026-03-12 15:00:21.024433657 +0000 UTC m=+779.668447883" Mar 12 15:00:21 crc kubenswrapper[4832]: I0312 15:00:21.048894 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-n6ng4" podStartSLOduration=1.928829805 podStartE2EDuration="5.048874341s" podCreationTimestamp="2026-03-12 15:00:16 +0000 UTC" firstStartedPulling="2026-03-12 15:00:16.718783606 +0000 UTC m=+775.362797842" lastFinishedPulling="2026-03-12 15:00:19.838828152 +0000 UTC m=+778.482842378" observedRunningTime="2026-03-12 15:00:21.047745388 +0000 UTC m=+779.691759644" 
watchObservedRunningTime="2026-03-12 15:00:21.048874341 +0000 UTC m=+779.692888567" Mar 12 15:00:22 crc kubenswrapper[4832]: I0312 15:00:22.025958 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-wnbqc" event={"ID":"c1a5c18d-2238-41d7-abe8-e5b6ddba52ba","Type":"ContainerStarted","Data":"186ce17758b514a6a59cbc8ccdd5ef01b0ccf4beaf04c23e12ba0a4385f6cf50"} Mar 12 15:00:22 crc kubenswrapper[4832]: I0312 15:00:22.051818 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-wnbqc" podStartSLOduration=1.659494619 podStartE2EDuration="6.051064234s" podCreationTimestamp="2026-03-12 15:00:16 +0000 UTC" firstStartedPulling="2026-03-12 15:00:16.74216253 +0000 UTC m=+775.386176756" lastFinishedPulling="2026-03-12 15:00:21.133732145 +0000 UTC m=+779.777746371" observedRunningTime="2026-03-12 15:00:22.048089178 +0000 UTC m=+780.692103464" watchObservedRunningTime="2026-03-12 15:00:22.051064234 +0000 UTC m=+780.695078480" Mar 12 15:00:22 crc kubenswrapper[4832]: I0312 15:00:22.262045 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555460-vd8s7" Mar 12 15:00:22 crc kubenswrapper[4832]: I0312 15:00:22.354592 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgr84\" (UniqueName: \"kubernetes.io/projected/269dd911-a1c0-453a-ac9b-0b235541e941-kube-api-access-wgr84\") pod \"269dd911-a1c0-453a-ac9b-0b235541e941\" (UID: \"269dd911-a1c0-453a-ac9b-0b235541e941\") " Mar 12 15:00:22 crc kubenswrapper[4832]: I0312 15:00:22.367711 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/269dd911-a1c0-453a-ac9b-0b235541e941-kube-api-access-wgr84" (OuterVolumeSpecName: "kube-api-access-wgr84") pod "269dd911-a1c0-453a-ac9b-0b235541e941" (UID: "269dd911-a1c0-453a-ac9b-0b235541e941"). 
InnerVolumeSpecName "kube-api-access-wgr84". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:00:22 crc kubenswrapper[4832]: I0312 15:00:22.456101 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgr84\" (UniqueName: \"kubernetes.io/projected/269dd911-a1c0-453a-ac9b-0b235541e941-kube-api-access-wgr84\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:23 crc kubenswrapper[4832]: I0312 15:00:23.032931 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555460-vd8s7" Mar 12 15:00:23 crc kubenswrapper[4832]: I0312 15:00:23.033324 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555460-vd8s7" event={"ID":"269dd911-a1c0-453a-ac9b-0b235541e941","Type":"ContainerDied","Data":"40987a7a62abe1e5c61abc1fefb3ea78a54620bcb8df4747f35f850d43b93f73"} Mar 12 15:00:23 crc kubenswrapper[4832]: I0312 15:00:23.033347 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40987a7a62abe1e5c61abc1fefb3ea78a54620bcb8df4747f35f850d43b93f73" Mar 12 15:00:23 crc kubenswrapper[4832]: I0312 15:00:23.336196 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555454-d5dp7"] Mar 12 15:00:23 crc kubenswrapper[4832]: I0312 15:00:23.340304 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555454-d5dp7"] Mar 12 15:00:24 crc kubenswrapper[4832]: I0312 15:00:24.634080 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae915dcc-7514-4c61-b873-be6de981b06e" path="/var/lib/kubelet/pods/ae915dcc-7514-4c61-b873-be6de981b06e/volumes" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.288264 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5zjpx"] Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.288958 4832 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85" gracePeriod=30 Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.289010 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="nbdb" containerID="cri-o://adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d" gracePeriod=30 Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.289077 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="northd" containerID="cri-o://7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224" gracePeriod=30 Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.289073 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="kube-rbac-proxy-node" containerID="cri-o://a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2" gracePeriod=30 Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.289107 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovn-acl-logging" containerID="cri-o://4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be" gracePeriod=30 Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.289298 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="sbdb" 
containerID="cri-o://cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392" gracePeriod=30 Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.289416 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovn-controller" containerID="cri-o://0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b" gracePeriod=30 Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.315493 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.315651 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.323729 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.327799 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.331905 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.333997 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.334057 4832 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="sbdb" Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.334939 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.339871 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.339943 4832 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="nbdb" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.339960 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovnkube-controller" containerID="cri-o://22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988" gracePeriod=30 Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.362579 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovnkube-controller" probeResult="failure" output="" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.512007 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-mstfv" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.632294 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5zjpx_18cc235e-1890-485d-8ca2-bf03b2006ab9/ovnkube-controller/3.log" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.634454 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5zjpx_18cc235e-1890-485d-8ca2-bf03b2006ab9/ovn-acl-logging/0.log" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.634980 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5zjpx_18cc235e-1890-485d-8ca2-bf03b2006ab9/ovn-controller/0.log" Mar 12 
15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.635407 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.691742 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vm2m7"] Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.692013 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovnkube-controller" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692058 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovnkube-controller" Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.692075 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="kubecfg-setup" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692085 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="kubecfg-setup" Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.692102 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="northd" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692112 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="northd" Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.692130 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269dd911-a1c0-453a-ac9b-0b235541e941" containerName="oc" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692138 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="269dd911-a1c0-453a-ac9b-0b235541e941" containerName="oc" Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.692148 4832 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="nbdb" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692155 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="nbdb" Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.692167 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovn-controller" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692174 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovn-controller" Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.692184 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="kube-rbac-proxy-ovn-metrics" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692191 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="kube-rbac-proxy-ovn-metrics" Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.692203 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovn-acl-logging" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692210 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovn-acl-logging" Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.692221 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="kube-rbac-proxy-node" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692228 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="kube-rbac-proxy-node" Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.692237 4832 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovnkube-controller" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692244 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovnkube-controller" Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.692254 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="sbdb" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692262 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="sbdb" Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.692275 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovnkube-controller" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692283 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovnkube-controller" Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.692292 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovnkube-controller" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692300 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovnkube-controller" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692425 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovnkube-controller" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692446 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="nbdb" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692457 4832 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="northd" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692473 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovn-controller" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692490 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovnkube-controller" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692499 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="269dd911-a1c0-453a-ac9b-0b235541e941" containerName="oc" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692530 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovnkube-controller" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692540 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovn-acl-logging" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692549 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovnkube-controller" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692558 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="kube-rbac-proxy-ovn-metrics" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692572 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="kube-rbac-proxy-node" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692585 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="sbdb" Mar 12 15:00:26 crc kubenswrapper[4832]: E0312 15:00:26.692703 4832 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovnkube-controller" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692714 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovnkube-controller" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.692859 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerName="ovnkube-controller" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.694867 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715162 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-run-openvswitch\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715213 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-systemd-units\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715253 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-run-systemd\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715276 
4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-node-log\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715300 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-slash\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715358 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-cni-bin\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715401 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715428 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xczgm\" (UniqueName: \"kubernetes.io/projected/de3b723b-e1e3-44be-923b-4c9b2769076c-kube-api-access-xczgm\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 
12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715462 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-run-ovn-kubernetes\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715484 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/de3b723b-e1e3-44be-923b-4c9b2769076c-ovnkube-script-lib\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715590 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-etc-openvswitch\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715653 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-cni-netd\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715680 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/de3b723b-e1e3-44be-923b-4c9b2769076c-env-overrides\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715705 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-run-ovn\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715737 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de3b723b-e1e3-44be-923b-4c9b2769076c-ovn-node-metrics-cert\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715792 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/de3b723b-e1e3-44be-923b-4c9b2769076c-ovnkube-config\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715845 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-var-lib-openvswitch\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715877 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-run-netns\") pod \"ovnkube-node-vm2m7\" (UID: 
\"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715916 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-kubelet\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.715942 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-log-socket\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816575 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/18cc235e-1890-485d-8ca2-bf03b2006ab9-ovnkube-config\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816619 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816640 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-run-ovn-kubernetes\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: 
\"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816656 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-run-systemd\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816676 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-run-openvswitch\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816697 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/18cc235e-1890-485d-8ca2-bf03b2006ab9-ovnkube-script-lib\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816723 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/18cc235e-1890-485d-8ca2-bf03b2006ab9-ovn-node-metrics-cert\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816722 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816745 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-cni-netd\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816766 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-log-socket\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816786 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-run-netns\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816807 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knqcx\" (UniqueName: \"kubernetes.io/projected/18cc235e-1890-485d-8ca2-bf03b2006ab9-kube-api-access-knqcx\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816826 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-var-lib-openvswitch\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816838 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816855 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-cni-bin\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816877 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816884 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-run-ovn\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816910 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816919 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-slash\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816921 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816948 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18cc235e-1890-485d-8ca2-bf03b2006ab9-env-overrides\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816955 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816975 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-node-log\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816980 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-slash" (OuterVolumeSpecName: "host-slash") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816983 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-log-socket" (OuterVolumeSpecName: "log-socket") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.816996 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-kubelet\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817032 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-systemd-units\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817056 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-etc-openvswitch\") pod \"18cc235e-1890-485d-8ca2-bf03b2006ab9\" (UID: \"18cc235e-1890-485d-8ca2-bf03b2006ab9\") " Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817007 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-node-log" (OuterVolumeSpecName: "node-log") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817026 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817052 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817070 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817130 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/de3b723b-e1e3-44be-923b-4c9b2769076c-ovnkube-script-lib\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817155 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817164 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-etc-openvswitch\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817189 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-cni-netd\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817206 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/de3b723b-e1e3-44be-923b-4c9b2769076c-env-overrides\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817231 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-run-ovn\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817254 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de3b723b-e1e3-44be-923b-4c9b2769076c-ovn-node-metrics-cert\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc 
kubenswrapper[4832]: I0312 15:00:26.817262 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-cni-netd\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817268 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-etc-openvswitch\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817308 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-run-ovn\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817338 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18cc235e-1890-485d-8ca2-bf03b2006ab9-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817281 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/de3b723b-e1e3-44be-923b-4c9b2769076c-ovnkube-config\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817360 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18cc235e-1890-485d-8ca2-bf03b2006ab9-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817379 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18cc235e-1890-485d-8ca2-bf03b2006ab9-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817410 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-var-lib-openvswitch\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817441 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-run-netns\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817454 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817488 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-log-socket\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817457 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-var-lib-openvswitch\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817462 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-log-socket\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817548 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-kubelet\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817494 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-run-netns\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 
15:00:26.817577 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-run-openvswitch\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817597 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-systemd-units\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817605 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-kubelet\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817649 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-run-systemd\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817652 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-run-openvswitch\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817674 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-node-log\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817694 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-run-systemd\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817694 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-slash\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817727 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-systemd-units\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817750 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817760 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-node-log\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817802 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-slash\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817775 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-cni-bin\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817830 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-cni-bin\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817839 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xczgm\" (UniqueName: \"kubernetes.io/projected/de3b723b-e1e3-44be-923b-4c9b2769076c-kube-api-access-xczgm\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817870 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817915 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/18cc235e-1890-485d-8ca2-bf03b2006ab9-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817928 4832 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817927 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/de3b723b-e1e3-44be-923b-4c9b2769076c-env-overrides\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817940 4832 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-log-socket\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817966 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-run-ovn-kubernetes\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817968 4832 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:26 crc 
kubenswrapper[4832]: I0312 15:00:26.817984 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de3b723b-e1e3-44be-923b-4c9b2769076c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.817994 4832 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.818005 4832 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.818001 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/de3b723b-e1e3-44be-923b-4c9b2769076c-ovnkube-script-lib\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.818016 4832 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.818027 4832 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-slash\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.818038 4832 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/18cc235e-1890-485d-8ca2-bf03b2006ab9-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.818049 4832 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-node-log\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.818059 4832 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.818070 4832 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.818080 4832 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.818090 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/18cc235e-1890-485d-8ca2-bf03b2006ab9-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.818102 4832 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.818114 4832 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.818126 4832 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.818492 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/de3b723b-e1e3-44be-923b-4c9b2769076c-ovnkube-config\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.823122 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de3b723b-e1e3-44be-923b-4c9b2769076c-ovn-node-metrics-cert\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.823169 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18cc235e-1890-485d-8ca2-bf03b2006ab9-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.823872 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18cc235e-1890-485d-8ca2-bf03b2006ab9-kube-api-access-knqcx" (OuterVolumeSpecName: "kube-api-access-knqcx") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). 
InnerVolumeSpecName "kube-api-access-knqcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.833730 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "18cc235e-1890-485d-8ca2-bf03b2006ab9" (UID: "18cc235e-1890-485d-8ca2-bf03b2006ab9"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.833889 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xczgm\" (UniqueName: \"kubernetes.io/projected/de3b723b-e1e3-44be-923b-4c9b2769076c-kube-api-access-xczgm\") pod \"ovnkube-node-vm2m7\" (UID: \"de3b723b-e1e3-44be-923b-4c9b2769076c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.919598 4832 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/18cc235e-1890-485d-8ca2-bf03b2006ab9-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.919651 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/18cc235e-1890-485d-8ca2-bf03b2006ab9-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:26 crc kubenswrapper[4832]: I0312 15:00:26.919673 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knqcx\" (UniqueName: \"kubernetes.io/projected/18cc235e-1890-485d-8ca2-bf03b2006ab9-kube-api-access-knqcx\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.009907 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:27 crc kubenswrapper[4832]: W0312 15:00:27.038618 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde3b723b_e1e3_44be_923b_4c9b2769076c.slice/crio-8cee3f95eb4c3e7e4e068fa1bfe22f94a2183d09afec7b6e983babb50208ca5c WatchSource:0}: Error finding container 8cee3f95eb4c3e7e4e068fa1bfe22f94a2183d09afec7b6e983babb50208ca5c: Status 404 returned error can't find the container with id 8cee3f95eb4c3e7e4e068fa1bfe22f94a2183d09afec7b6e983babb50208ca5c Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.063235 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c2phv_7c82e050-0168-4210-bb2d-7d8bbbc5e74e/kube-multus/2.log" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.063975 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c2phv_7c82e050-0168-4210-bb2d-7d8bbbc5e74e/kube-multus/1.log" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.064051 4832 generic.go:334] "Generic (PLEG): container finished" podID="7c82e050-0168-4210-bb2d-7d8bbbc5e74e" containerID="1c43e6d8173102d19ca758d7fb313a4e1c96a4f798e4602c17d35e077db030cd" exitCode=2 Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.064145 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c2phv" event={"ID":"7c82e050-0168-4210-bb2d-7d8bbbc5e74e","Type":"ContainerDied","Data":"1c43e6d8173102d19ca758d7fb313a4e1c96a4f798e4602c17d35e077db030cd"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.064248 4832 scope.go:117] "RemoveContainer" containerID="3ec915d4a1c18059ebeeea82f6fea8505e9b88e684f7cded7c4ac243189d7ec0" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.065758 4832 scope.go:117] "RemoveContainer" containerID="1c43e6d8173102d19ca758d7fb313a4e1c96a4f798e4602c17d35e077db030cd" Mar 12 15:00:27 crc kubenswrapper[4832]: 
E0312 15:00:27.066183 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-c2phv_openshift-multus(7c82e050-0168-4210-bb2d-7d8bbbc5e74e)\"" pod="openshift-multus/multus-c2phv" podUID="7c82e050-0168-4210-bb2d-7d8bbbc5e74e" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.070430 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5zjpx_18cc235e-1890-485d-8ca2-bf03b2006ab9/ovnkube-controller/3.log" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.076203 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5zjpx_18cc235e-1890-485d-8ca2-bf03b2006ab9/ovn-acl-logging/0.log" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.076698 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5zjpx_18cc235e-1890-485d-8ca2-bf03b2006ab9/ovn-controller/0.log" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077047 4832 generic.go:334] "Generic (PLEG): container finished" podID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerID="22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988" exitCode=0 Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077083 4832 generic.go:334] "Generic (PLEG): container finished" podID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerID="cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392" exitCode=0 Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077098 4832 generic.go:334] "Generic (PLEG): container finished" podID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerID="adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d" exitCode=0 Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077110 4832 generic.go:334] "Generic (PLEG): container finished" podID="18cc235e-1890-485d-8ca2-bf03b2006ab9" 
containerID="7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224" exitCode=0 Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077124 4832 generic.go:334] "Generic (PLEG): container finished" podID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerID="be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85" exitCode=0 Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077136 4832 generic.go:334] "Generic (PLEG): container finished" podID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerID="a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2" exitCode=0 Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077149 4832 generic.go:334] "Generic (PLEG): container finished" podID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerID="4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be" exitCode=143 Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077163 4832 generic.go:334] "Generic (PLEG): container finished" podID="18cc235e-1890-485d-8ca2-bf03b2006ab9" containerID="0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b" exitCode=143 Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077186 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077121 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerDied","Data":"22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077361 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerDied","Data":"cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077395 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerDied","Data":"adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077410 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerDied","Data":"7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077420 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerDied","Data":"be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077430 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerDied","Data":"a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2"} Mar 12 15:00:27 crc 
kubenswrapper[4832]: I0312 15:00:27.077443 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077454 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077460 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077467 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077473 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077479 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077484 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077489 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be"} Mar 12 15:00:27 crc 
kubenswrapper[4832]: I0312 15:00:27.077496 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077518 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077526 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerDied","Data":"4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077535 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077543 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077548 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077553 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077559 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077564 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077568 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077573 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077578 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077583 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077590 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerDied","Data":"0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077598 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077604 4832 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077609 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077615 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077621 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077626 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077631 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077636 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077641 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077647 4832 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077653 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zjpx" event={"ID":"18cc235e-1890-485d-8ca2-bf03b2006ab9","Type":"ContainerDied","Data":"b621bec7a83b49b63024462040fc9075077debcc3ba22ca56dddbb2ff9e63e94"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077661 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077666 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077672 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077677 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077683 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077689 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85"} Mar 12 
15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077696 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077701 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077706 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.077711 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.081268 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" event={"ID":"de3b723b-e1e3-44be-923b-4c9b2769076c","Type":"ContainerStarted","Data":"8cee3f95eb4c3e7e4e068fa1bfe22f94a2183d09afec7b6e983babb50208ca5c"} Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.118922 4832 scope.go:117] "RemoveContainer" containerID="22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.133603 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5zjpx"] Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.138409 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5zjpx"] Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.146438 4832 scope.go:117] "RemoveContainer" 
containerID="1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.167118 4832 scope.go:117] "RemoveContainer" containerID="cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.231652 4832 scope.go:117] "RemoveContainer" containerID="adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.247462 4832 scope.go:117] "RemoveContainer" containerID="7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.259650 4832 scope.go:117] "RemoveContainer" containerID="be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.290767 4832 scope.go:117] "RemoveContainer" containerID="a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.315026 4832 scope.go:117] "RemoveContainer" containerID="4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.335192 4832 scope.go:117] "RemoveContainer" containerID="0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.355062 4832 scope.go:117] "RemoveContainer" containerID="873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.371614 4832 scope.go:117] "RemoveContainer" containerID="22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988" Mar 12 15:00:27 crc kubenswrapper[4832]: E0312 15:00:27.372080 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988\": container with ID starting with 
22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988 not found: ID does not exist" containerID="22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.372108 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988"} err="failed to get container status \"22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988\": rpc error: code = NotFound desc = could not find container \"22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988\": container with ID starting with 22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988 not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.372130 4832 scope.go:117] "RemoveContainer" containerID="1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f" Mar 12 15:00:27 crc kubenswrapper[4832]: E0312 15:00:27.372461 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f\": container with ID starting with 1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f not found: ID does not exist" containerID="1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.372481 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f"} err="failed to get container status \"1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f\": rpc error: code = NotFound desc = could not find container \"1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f\": container with ID starting with 1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f not found: ID does not 
exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.372495 4832 scope.go:117] "RemoveContainer" containerID="cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392" Mar 12 15:00:27 crc kubenswrapper[4832]: E0312 15:00:27.373026 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\": container with ID starting with cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392 not found: ID does not exist" containerID="cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.373083 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392"} err="failed to get container status \"cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\": rpc error: code = NotFound desc = could not find container \"cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\": container with ID starting with cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392 not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.373116 4832 scope.go:117] "RemoveContainer" containerID="adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d" Mar 12 15:00:27 crc kubenswrapper[4832]: E0312 15:00:27.373432 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\": container with ID starting with adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d not found: ID does not exist" containerID="adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.373462 4832 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d"} err="failed to get container status \"adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\": rpc error: code = NotFound desc = could not find container \"adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\": container with ID starting with adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.373487 4832 scope.go:117] "RemoveContainer" containerID="7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224" Mar 12 15:00:27 crc kubenswrapper[4832]: E0312 15:00:27.373817 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\": container with ID starting with 7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224 not found: ID does not exist" containerID="7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.373839 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224"} err="failed to get container status \"7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\": rpc error: code = NotFound desc = could not find container \"7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\": container with ID starting with 7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224 not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.373853 4832 scope.go:117] "RemoveContainer" containerID="be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85" Mar 12 15:00:27 crc kubenswrapper[4832]: E0312 15:00:27.374086 4832 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\": container with ID starting with be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85 not found: ID does not exist" containerID="be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.374129 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85"} err="failed to get container status \"be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\": rpc error: code = NotFound desc = could not find container \"be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\": container with ID starting with be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85 not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.374148 4832 scope.go:117] "RemoveContainer" containerID="a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2" Mar 12 15:00:27 crc kubenswrapper[4832]: E0312 15:00:27.374437 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\": container with ID starting with a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2 not found: ID does not exist" containerID="a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.374461 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2"} err="failed to get container status \"a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\": rpc error: code = NotFound desc = could 
not find container \"a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\": container with ID starting with a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2 not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.374476 4832 scope.go:117] "RemoveContainer" containerID="4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be" Mar 12 15:00:27 crc kubenswrapper[4832]: E0312 15:00:27.374764 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\": container with ID starting with 4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be not found: ID does not exist" containerID="4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.374798 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be"} err="failed to get container status \"4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\": rpc error: code = NotFound desc = could not find container \"4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\": container with ID starting with 4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.374818 4832 scope.go:117] "RemoveContainer" containerID="0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b" Mar 12 15:00:27 crc kubenswrapper[4832]: E0312 15:00:27.375112 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\": container with ID starting with 0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b not found: 
ID does not exist" containerID="0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.375141 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b"} err="failed to get container status \"0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\": rpc error: code = NotFound desc = could not find container \"0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\": container with ID starting with 0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.375156 4832 scope.go:117] "RemoveContainer" containerID="873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f" Mar 12 15:00:27 crc kubenswrapper[4832]: E0312 15:00:27.375380 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\": container with ID starting with 873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f not found: ID does not exist" containerID="873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.375412 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f"} err="failed to get container status \"873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\": rpc error: code = NotFound desc = could not find container \"873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\": container with ID starting with 873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.375437 4832 
scope.go:117] "RemoveContainer" containerID="22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.375671 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988"} err="failed to get container status \"22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988\": rpc error: code = NotFound desc = could not find container \"22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988\": container with ID starting with 22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988 not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.375691 4832 scope.go:117] "RemoveContainer" containerID="1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.375919 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f"} err="failed to get container status \"1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f\": rpc error: code = NotFound desc = could not find container \"1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f\": container with ID starting with 1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.375937 4832 scope.go:117] "RemoveContainer" containerID="cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.376140 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392"} err="failed to get container status \"cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\": rpc 
error: code = NotFound desc = could not find container \"cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\": container with ID starting with cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392 not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.376166 4832 scope.go:117] "RemoveContainer" containerID="adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.376455 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d"} err="failed to get container status \"adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\": rpc error: code = NotFound desc = could not find container \"adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\": container with ID starting with adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.376478 4832 scope.go:117] "RemoveContainer" containerID="7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.376756 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224"} err="failed to get container status \"7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\": rpc error: code = NotFound desc = could not find container \"7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\": container with ID starting with 7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224 not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.376774 4832 scope.go:117] "RemoveContainer" containerID="be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85" Mar 12 15:00:27 crc 
kubenswrapper[4832]: I0312 15:00:27.377114 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85"} err="failed to get container status \"be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\": rpc error: code = NotFound desc = could not find container \"be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\": container with ID starting with be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85 not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.377142 4832 scope.go:117] "RemoveContainer" containerID="a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.377433 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2"} err="failed to get container status \"a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\": rpc error: code = NotFound desc = could not find container \"a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\": container with ID starting with a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2 not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.377452 4832 scope.go:117] "RemoveContainer" containerID="4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.377796 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be"} err="failed to get container status \"4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\": rpc error: code = NotFound desc = could not find container \"4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\": container 
with ID starting with 4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.377843 4832 scope.go:117] "RemoveContainer" containerID="0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.378195 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b"} err="failed to get container status \"0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\": rpc error: code = NotFound desc = could not find container \"0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\": container with ID starting with 0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.378234 4832 scope.go:117] "RemoveContainer" containerID="873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.378484 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f"} err="failed to get container status \"873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\": rpc error: code = NotFound desc = could not find container \"873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\": container with ID starting with 873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.378539 4832 scope.go:117] "RemoveContainer" containerID="22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.378796 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988"} err="failed to get container status \"22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988\": rpc error: code = NotFound desc = could not find container \"22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988\": container with ID starting with 22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988 not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.378815 4832 scope.go:117] "RemoveContainer" containerID="1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.379083 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f"} err="failed to get container status \"1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f\": rpc error: code = NotFound desc = could not find container \"1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f\": container with ID starting with 1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.379108 4832 scope.go:117] "RemoveContainer" containerID="cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.379356 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392"} err="failed to get container status \"cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\": rpc error: code = NotFound desc = could not find container \"cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\": container with ID starting with cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392 not found: ID does not 
exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.379379 4832 scope.go:117] "RemoveContainer" containerID="adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.379620 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d"} err="failed to get container status \"adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\": rpc error: code = NotFound desc = could not find container \"adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\": container with ID starting with adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.379641 4832 scope.go:117] "RemoveContainer" containerID="7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.379902 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224"} err="failed to get container status \"7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\": rpc error: code = NotFound desc = could not find container \"7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\": container with ID starting with 7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224 not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.379937 4832 scope.go:117] "RemoveContainer" containerID="be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.380199 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85"} err="failed to get container status 
\"be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\": rpc error: code = NotFound desc = could not find container \"be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\": container with ID starting with be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85 not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.380231 4832 scope.go:117] "RemoveContainer" containerID="a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.380481 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2"} err="failed to get container status \"a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\": rpc error: code = NotFound desc = could not find container \"a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\": container with ID starting with a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2 not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.380499 4832 scope.go:117] "RemoveContainer" containerID="4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.380714 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be"} err="failed to get container status \"4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\": rpc error: code = NotFound desc = could not find container \"4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\": container with ID starting with 4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.380735 4832 scope.go:117] "RemoveContainer" 
containerID="0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.380970 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b"} err="failed to get container status \"0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\": rpc error: code = NotFound desc = could not find container \"0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\": container with ID starting with 0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.380996 4832 scope.go:117] "RemoveContainer" containerID="873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.381368 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f"} err="failed to get container status \"873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\": rpc error: code = NotFound desc = could not find container \"873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\": container with ID starting with 873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.381389 4832 scope.go:117] "RemoveContainer" containerID="22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.381637 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988"} err="failed to get container status \"22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988\": rpc error: code = NotFound desc = could 
not find container \"22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988\": container with ID starting with 22a3c2bd37e35f451fe10f24d4d805f86fc0e1262e0521d60692a18666906988 not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.381661 4832 scope.go:117] "RemoveContainer" containerID="1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.381861 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f"} err="failed to get container status \"1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f\": rpc error: code = NotFound desc = could not find container \"1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f\": container with ID starting with 1f44cc434d1599aa495fb7aeab5cfc41349f4d3cbcd66d43d70b5e62d953295f not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.381880 4832 scope.go:117] "RemoveContainer" containerID="cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.382128 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392"} err="failed to get container status \"cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\": rpc error: code = NotFound desc = could not find container \"cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392\": container with ID starting with cc5c44d2b4c89f19a666b77aae7644612f18d7e2aa0b66dfd6c9da0365e1c392 not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.382158 4832 scope.go:117] "RemoveContainer" containerID="adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 
15:00:27.382441 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d"} err="failed to get container status \"adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\": rpc error: code = NotFound desc = could not find container \"adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d\": container with ID starting with adcb617a1202f79136712c7f82f7002f6ea6f233ba4dac5cbd578742bd36dc2d not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.382470 4832 scope.go:117] "RemoveContainer" containerID="7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.382728 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224"} err="failed to get container status \"7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\": rpc error: code = NotFound desc = could not find container \"7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224\": container with ID starting with 7ac166d26fe0c01989f55c682fe63afce71beefeaa1410cc20a2df249923e224 not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.382752 4832 scope.go:117] "RemoveContainer" containerID="be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.382964 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85"} err="failed to get container status \"be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\": rpc error: code = NotFound desc = could not find container \"be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85\": container with ID starting with 
be3a43125334370f71eae1ec76d26cac04f6ebd12c4ce48e4e41be4af0e42d85 not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.382983 4832 scope.go:117] "RemoveContainer" containerID="a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.383392 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2"} err="failed to get container status \"a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\": rpc error: code = NotFound desc = could not find container \"a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2\": container with ID starting with a7fd3e147e4977e040dc08aec0abe47f9b9454f9a03cccb0a82afe70c5ee4ef2 not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.383419 4832 scope.go:117] "RemoveContainer" containerID="4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.383666 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be"} err="failed to get container status \"4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\": rpc error: code = NotFound desc = could not find container \"4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be\": container with ID starting with 4a224047f621107f38b62bdc68495d572aacd4e7c9d49b07500d3d57913d65be not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.383717 4832 scope.go:117] "RemoveContainer" containerID="0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.384048 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b"} err="failed to get container status \"0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\": rpc error: code = NotFound desc = could not find container \"0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b\": container with ID starting with 0f701ea96d5ac62be2b399cf969c8a7a269e0783eb67af1aa0cd90115fc6543b not found: ID does not exist" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.384068 4832 scope.go:117] "RemoveContainer" containerID="873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f" Mar 12 15:00:27 crc kubenswrapper[4832]: I0312 15:00:27.384332 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f"} err="failed to get container status \"873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\": rpc error: code = NotFound desc = could not find container \"873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f\": container with ID starting with 873f098546bc69b0adb7b3d23a41319b783ca2d28ca9bc2b48aacd659b8d4c7f not found: ID does not exist" Mar 12 15:00:28 crc kubenswrapper[4832]: I0312 15:00:28.089769 4832 generic.go:334] "Generic (PLEG): container finished" podID="de3b723b-e1e3-44be-923b-4c9b2769076c" containerID="973e7c0ac9a763fe03ae85d4bca368b45d22a9aced9fbbd7547e89ba678bf9cf" exitCode=0 Mar 12 15:00:28 crc kubenswrapper[4832]: I0312 15:00:28.089862 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" event={"ID":"de3b723b-e1e3-44be-923b-4c9b2769076c","Type":"ContainerDied","Data":"973e7c0ac9a763fe03ae85d4bca368b45d22a9aced9fbbd7547e89ba678bf9cf"} Mar 12 15:00:28 crc kubenswrapper[4832]: I0312 15:00:28.091865 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-c2phv_7c82e050-0168-4210-bb2d-7d8bbbc5e74e/kube-multus/2.log" Mar 12 15:00:28 crc kubenswrapper[4832]: I0312 15:00:28.627459 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18cc235e-1890-485d-8ca2-bf03b2006ab9" path="/var/lib/kubelet/pods/18cc235e-1890-485d-8ca2-bf03b2006ab9/volumes" Mar 12 15:00:29 crc kubenswrapper[4832]: I0312 15:00:29.101989 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" event={"ID":"de3b723b-e1e3-44be-923b-4c9b2769076c","Type":"ContainerStarted","Data":"765c24037a354f478f20e187a8e583c32519d2eda15617760ad4c4d865986d22"} Mar 12 15:00:29 crc kubenswrapper[4832]: I0312 15:00:29.102547 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" event={"ID":"de3b723b-e1e3-44be-923b-4c9b2769076c","Type":"ContainerStarted","Data":"b6265cc0a9407c1af44ca07218e2ed91c9a35646e550d5d60bfc0083c3af34f8"} Mar 12 15:00:29 crc kubenswrapper[4832]: I0312 15:00:29.102635 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" event={"ID":"de3b723b-e1e3-44be-923b-4c9b2769076c","Type":"ContainerStarted","Data":"2d49d8864dd9feee4d6d4888f65ed9b8fc3f411f4db6311e51c54f3e211ba7b5"} Mar 12 15:00:29 crc kubenswrapper[4832]: I0312 15:00:29.102750 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" event={"ID":"de3b723b-e1e3-44be-923b-4c9b2769076c","Type":"ContainerStarted","Data":"2f21c1f625060aa0eb9c5267e53122aee592f952e899d55b47bdce531e3834df"} Mar 12 15:00:29 crc kubenswrapper[4832]: I0312 15:00:29.102827 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" event={"ID":"de3b723b-e1e3-44be-923b-4c9b2769076c","Type":"ContainerStarted","Data":"9da7b456175170067023d435c4860f5ccfdc1928291155ba8d84ae91ec5860c0"} Mar 12 15:00:29 crc kubenswrapper[4832]: 
I0312 15:00:29.102930 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" event={"ID":"de3b723b-e1e3-44be-923b-4c9b2769076c","Type":"ContainerStarted","Data":"8762ba54bcc04c28e2d046adc5e01f6263fbac082eea535f3c8b5988b57911ae"} Mar 12 15:00:31 crc kubenswrapper[4832]: I0312 15:00:31.117347 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" event={"ID":"de3b723b-e1e3-44be-923b-4c9b2769076c","Type":"ContainerStarted","Data":"3230f84c2e4184231290c315b7cddf7d816d0b82534cd290dd5cdc4f06d182b5"} Mar 12 15:00:34 crc kubenswrapper[4832]: I0312 15:00:34.138385 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" event={"ID":"de3b723b-e1e3-44be-923b-4c9b2769076c","Type":"ContainerStarted","Data":"6232d8ce749ef500f8d64adcb14c27a264bf41237b449e4d10cda898427ff50f"} Mar 12 15:00:34 crc kubenswrapper[4832]: I0312 15:00:34.138965 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:34 crc kubenswrapper[4832]: I0312 15:00:34.138981 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:34 crc kubenswrapper[4832]: I0312 15:00:34.173980 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" podStartSLOduration=8.173957655 podStartE2EDuration="8.173957655s" podCreationTimestamp="2026-03-12 15:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:00:34.167966603 +0000 UTC m=+792.811980819" watchObservedRunningTime="2026-03-12 15:00:34.173957655 +0000 UTC m=+792.817971881" Mar 12 15:00:34 crc kubenswrapper[4832]: I0312 15:00:34.182056 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:35 crc kubenswrapper[4832]: I0312 15:00:35.143404 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:35 crc kubenswrapper[4832]: I0312 15:00:35.168559 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:00:40 crc kubenswrapper[4832]: I0312 15:00:40.619216 4832 scope.go:117] "RemoveContainer" containerID="1c43e6d8173102d19ca758d7fb313a4e1c96a4f798e4602c17d35e077db030cd" Mar 12 15:00:40 crc kubenswrapper[4832]: E0312 15:00:40.619894 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-c2phv_openshift-multus(7c82e050-0168-4210-bb2d-7d8bbbc5e74e)\"" pod="openshift-multus/multus-c2phv" podUID="7c82e050-0168-4210-bb2d-7d8bbbc5e74e" Mar 12 15:00:41 crc kubenswrapper[4832]: I0312 15:00:41.317410 4832 scope.go:117] "RemoveContainer" containerID="2907439dd9e5ace54963327600bfbddd9b6de6e6937b1af187041060df6dfbe4" Mar 12 15:00:54 crc kubenswrapper[4832]: I0312 15:00:54.619285 4832 scope.go:117] "RemoveContainer" containerID="1c43e6d8173102d19ca758d7fb313a4e1c96a4f798e4602c17d35e077db030cd" Mar 12 15:00:55 crc kubenswrapper[4832]: I0312 15:00:55.274889 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c2phv_7c82e050-0168-4210-bb2d-7d8bbbc5e74e/kube-multus/2.log" Mar 12 15:00:55 crc kubenswrapper[4832]: I0312 15:00:55.275215 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c2phv" event={"ID":"7c82e050-0168-4210-bb2d-7d8bbbc5e74e","Type":"ContainerStarted","Data":"78a4518696e51eda8c1274bd519ae85df3090248959e0cc08072ff2c983f36a0"} Mar 12 15:00:56 crc kubenswrapper[4832]: I0312 15:00:56.314648 4832 patch_prober.go:28] interesting 
pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:00:56 crc kubenswrapper[4832]: I0312 15:00:56.315073 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:00:57 crc kubenswrapper[4832]: I0312 15:00:57.032812 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vm2m7" Mar 12 15:01:02 crc kubenswrapper[4832]: I0312 15:01:02.265758 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b"] Mar 12 15:01:02 crc kubenswrapper[4832]: I0312 15:01:02.267089 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b" Mar 12 15:01:02 crc kubenswrapper[4832]: I0312 15:01:02.268578 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 12 15:01:02 crc kubenswrapper[4832]: I0312 15:01:02.274632 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b"] Mar 12 15:01:02 crc kubenswrapper[4832]: I0312 15:01:02.286807 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ks9m\" (UniqueName: \"kubernetes.io/projected/c89f0058-b036-4452-b358-f49f86a66fb7-kube-api-access-7ks9m\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b\" (UID: \"c89f0058-b036-4452-b358-f49f86a66fb7\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b" Mar 12 15:01:02 crc kubenswrapper[4832]: I0312 15:01:02.286853 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c89f0058-b036-4452-b358-f49f86a66fb7-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b\" (UID: \"c89f0058-b036-4452-b358-f49f86a66fb7\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b" Mar 12 15:01:02 crc kubenswrapper[4832]: I0312 15:01:02.286911 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c89f0058-b036-4452-b358-f49f86a66fb7-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b\" (UID: \"c89f0058-b036-4452-b358-f49f86a66fb7\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b" Mar 12 15:01:02 crc kubenswrapper[4832]: 
I0312 15:01:02.388191 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ks9m\" (UniqueName: \"kubernetes.io/projected/c89f0058-b036-4452-b358-f49f86a66fb7-kube-api-access-7ks9m\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b\" (UID: \"c89f0058-b036-4452-b358-f49f86a66fb7\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b" Mar 12 15:01:02 crc kubenswrapper[4832]: I0312 15:01:02.388579 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c89f0058-b036-4452-b358-f49f86a66fb7-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b\" (UID: \"c89f0058-b036-4452-b358-f49f86a66fb7\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b" Mar 12 15:01:02 crc kubenswrapper[4832]: I0312 15:01:02.388656 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c89f0058-b036-4452-b358-f49f86a66fb7-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b\" (UID: \"c89f0058-b036-4452-b358-f49f86a66fb7\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b" Mar 12 15:01:02 crc kubenswrapper[4832]: I0312 15:01:02.389122 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c89f0058-b036-4452-b358-f49f86a66fb7-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b\" (UID: \"c89f0058-b036-4452-b358-f49f86a66fb7\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b" Mar 12 15:01:02 crc kubenswrapper[4832]: I0312 15:01:02.389200 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c89f0058-b036-4452-b358-f49f86a66fb7-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b\" (UID: \"c89f0058-b036-4452-b358-f49f86a66fb7\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b" Mar 12 15:01:02 crc kubenswrapper[4832]: I0312 15:01:02.408818 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ks9m\" (UniqueName: \"kubernetes.io/projected/c89f0058-b036-4452-b358-f49f86a66fb7-kube-api-access-7ks9m\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b\" (UID: \"c89f0058-b036-4452-b358-f49f86a66fb7\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b" Mar 12 15:01:02 crc kubenswrapper[4832]: I0312 15:01:02.590337 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b" Mar 12 15:01:02 crc kubenswrapper[4832]: I0312 15:01:02.973021 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b"] Mar 12 15:01:02 crc kubenswrapper[4832]: W0312 15:01:02.984106 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc89f0058_b036_4452_b358_f49f86a66fb7.slice/crio-4260cecb5bc45236411eafc2d5b85bbf945674f71cbea80f26bec44e9445ae43 WatchSource:0}: Error finding container 4260cecb5bc45236411eafc2d5b85bbf945674f71cbea80f26bec44e9445ae43: Status 404 returned error can't find the container with id 4260cecb5bc45236411eafc2d5b85bbf945674f71cbea80f26bec44e9445ae43 Mar 12 15:01:03 crc kubenswrapper[4832]: I0312 15:01:03.324841 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b" 
event={"ID":"c89f0058-b036-4452-b358-f49f86a66fb7","Type":"ContainerStarted","Data":"13db567900ce1eaf1ba43e233ec5779ed7a1df550ccc35b2b507350417d0b3bd"} Mar 12 15:01:03 crc kubenswrapper[4832]: I0312 15:01:03.325669 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b" event={"ID":"c89f0058-b036-4452-b358-f49f86a66fb7","Type":"ContainerStarted","Data":"4260cecb5bc45236411eafc2d5b85bbf945674f71cbea80f26bec44e9445ae43"} Mar 12 15:01:04 crc kubenswrapper[4832]: I0312 15:01:04.332837 4832 generic.go:334] "Generic (PLEG): container finished" podID="c89f0058-b036-4452-b358-f49f86a66fb7" containerID="13db567900ce1eaf1ba43e233ec5779ed7a1df550ccc35b2b507350417d0b3bd" exitCode=0 Mar 12 15:01:04 crc kubenswrapper[4832]: I0312 15:01:04.332904 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b" event={"ID":"c89f0058-b036-4452-b358-f49f86a66fb7","Type":"ContainerDied","Data":"13db567900ce1eaf1ba43e233ec5779ed7a1df550ccc35b2b507350417d0b3bd"} Mar 12 15:01:04 crc kubenswrapper[4832]: I0312 15:01:04.336325 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 15:01:06 crc kubenswrapper[4832]: I0312 15:01:06.345069 4832 generic.go:334] "Generic (PLEG): container finished" podID="c89f0058-b036-4452-b358-f49f86a66fb7" containerID="b31f9f1e62904862a3519d09e114866544f8dabf92aa0676665d73453bc736dd" exitCode=0 Mar 12 15:01:06 crc kubenswrapper[4832]: I0312 15:01:06.345109 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b" event={"ID":"c89f0058-b036-4452-b358-f49f86a66fb7","Type":"ContainerDied","Data":"b31f9f1e62904862a3519d09e114866544f8dabf92aa0676665d73453bc736dd"} Mar 12 15:01:07 crc kubenswrapper[4832]: I0312 15:01:07.351908 4832 generic.go:334] 
"Generic (PLEG): container finished" podID="c89f0058-b036-4452-b358-f49f86a66fb7" containerID="7d69178831dabee6cffc510a7f657246e3d8d74f5cfe8e5758c8641c6c05692e" exitCode=0 Mar 12 15:01:07 crc kubenswrapper[4832]: I0312 15:01:07.352004 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b" event={"ID":"c89f0058-b036-4452-b358-f49f86a66fb7","Type":"ContainerDied","Data":"7d69178831dabee6cffc510a7f657246e3d8d74f5cfe8e5758c8641c6c05692e"} Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.626735 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.668560 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c89f0058-b036-4452-b358-f49f86a66fb7-bundle\") pod \"c89f0058-b036-4452-b358-f49f86a66fb7\" (UID: \"c89f0058-b036-4452-b358-f49f86a66fb7\") " Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.668614 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c89f0058-b036-4452-b358-f49f86a66fb7-util\") pod \"c89f0058-b036-4452-b358-f49f86a66fb7\" (UID: \"c89f0058-b036-4452-b358-f49f86a66fb7\") " Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.668671 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ks9m\" (UniqueName: \"kubernetes.io/projected/c89f0058-b036-4452-b358-f49f86a66fb7-kube-api-access-7ks9m\") pod \"c89f0058-b036-4452-b358-f49f86a66fb7\" (UID: \"c89f0058-b036-4452-b358-f49f86a66fb7\") " Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.669374 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c89f0058-b036-4452-b358-f49f86a66fb7-bundle" (OuterVolumeSpecName: "bundle") pod "c89f0058-b036-4452-b358-f49f86a66fb7" (UID: "c89f0058-b036-4452-b358-f49f86a66fb7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.669874 4832 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c89f0058-b036-4452-b358-f49f86a66fb7-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.676012 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c89f0058-b036-4452-b358-f49f86a66fb7-kube-api-access-7ks9m" (OuterVolumeSpecName: "kube-api-access-7ks9m") pod "c89f0058-b036-4452-b358-f49f86a66fb7" (UID: "c89f0058-b036-4452-b358-f49f86a66fb7"). InnerVolumeSpecName "kube-api-access-7ks9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.683392 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c89f0058-b036-4452-b358-f49f86a66fb7-util" (OuterVolumeSpecName: "util") pod "c89f0058-b036-4452-b358-f49f86a66fb7" (UID: "c89f0058-b036-4452-b358-f49f86a66fb7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.771471 4832 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c89f0058-b036-4452-b358-f49f86a66fb7-util\") on node \"crc\" DevicePath \"\"" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.771579 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ks9m\" (UniqueName: \"kubernetes.io/projected/c89f0058-b036-4452-b358-f49f86a66fb7-kube-api-access-7ks9m\") on node \"crc\" DevicePath \"\"" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.825649 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s8q7d"] Mar 12 15:01:08 crc kubenswrapper[4832]: E0312 15:01:08.825940 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89f0058-b036-4452-b358-f49f86a66fb7" containerName="extract" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.825967 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89f0058-b036-4452-b358-f49f86a66fb7" containerName="extract" Mar 12 15:01:08 crc kubenswrapper[4832]: E0312 15:01:08.825993 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89f0058-b036-4452-b358-f49f86a66fb7" containerName="pull" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.826007 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89f0058-b036-4452-b358-f49f86a66fb7" containerName="pull" Mar 12 15:01:08 crc kubenswrapper[4832]: E0312 15:01:08.826028 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89f0058-b036-4452-b358-f49f86a66fb7" containerName="util" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.826040 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89f0058-b036-4452-b358-f49f86a66fb7" containerName="util" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.826224 4832 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c89f0058-b036-4452-b358-f49f86a66fb7" containerName="extract" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.827772 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8q7d" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.850681 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8q7d"] Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.872418 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ad1ef0-306a-47c0-95bb-7f3a55a471ea-catalog-content\") pod \"redhat-operators-s8q7d\" (UID: \"24ad1ef0-306a-47c0-95bb-7f3a55a471ea\") " pod="openshift-marketplace/redhat-operators-s8q7d" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.872466 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ad1ef0-306a-47c0-95bb-7f3a55a471ea-utilities\") pod \"redhat-operators-s8q7d\" (UID: \"24ad1ef0-306a-47c0-95bb-7f3a55a471ea\") " pod="openshift-marketplace/redhat-operators-s8q7d" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.872521 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2z9r\" (UniqueName: \"kubernetes.io/projected/24ad1ef0-306a-47c0-95bb-7f3a55a471ea-kube-api-access-f2z9r\") pod \"redhat-operators-s8q7d\" (UID: \"24ad1ef0-306a-47c0-95bb-7f3a55a471ea\") " pod="openshift-marketplace/redhat-operators-s8q7d" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.973682 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ad1ef0-306a-47c0-95bb-7f3a55a471ea-catalog-content\") pod \"redhat-operators-s8q7d\" (UID: 
\"24ad1ef0-306a-47c0-95bb-7f3a55a471ea\") " pod="openshift-marketplace/redhat-operators-s8q7d" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.973728 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ad1ef0-306a-47c0-95bb-7f3a55a471ea-utilities\") pod \"redhat-operators-s8q7d\" (UID: \"24ad1ef0-306a-47c0-95bb-7f3a55a471ea\") " pod="openshift-marketplace/redhat-operators-s8q7d" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.973763 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2z9r\" (UniqueName: \"kubernetes.io/projected/24ad1ef0-306a-47c0-95bb-7f3a55a471ea-kube-api-access-f2z9r\") pod \"redhat-operators-s8q7d\" (UID: \"24ad1ef0-306a-47c0-95bb-7f3a55a471ea\") " pod="openshift-marketplace/redhat-operators-s8q7d" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.974178 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ad1ef0-306a-47c0-95bb-7f3a55a471ea-catalog-content\") pod \"redhat-operators-s8q7d\" (UID: \"24ad1ef0-306a-47c0-95bb-7f3a55a471ea\") " pod="openshift-marketplace/redhat-operators-s8q7d" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.974228 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ad1ef0-306a-47c0-95bb-7f3a55a471ea-utilities\") pod \"redhat-operators-s8q7d\" (UID: \"24ad1ef0-306a-47c0-95bb-7f3a55a471ea\") " pod="openshift-marketplace/redhat-operators-s8q7d" Mar 12 15:01:08 crc kubenswrapper[4832]: I0312 15:01:08.993597 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2z9r\" (UniqueName: \"kubernetes.io/projected/24ad1ef0-306a-47c0-95bb-7f3a55a471ea-kube-api-access-f2z9r\") pod \"redhat-operators-s8q7d\" (UID: \"24ad1ef0-306a-47c0-95bb-7f3a55a471ea\") " 
pod="openshift-marketplace/redhat-operators-s8q7d" Mar 12 15:01:09 crc kubenswrapper[4832]: I0312 15:01:09.164554 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8q7d" Mar 12 15:01:09 crc kubenswrapper[4832]: I0312 15:01:09.342319 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8q7d"] Mar 12 15:01:09 crc kubenswrapper[4832]: W0312 15:01:09.351199 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24ad1ef0_306a_47c0_95bb_7f3a55a471ea.slice/crio-17e8a5eedae6c91199fb874d20dc61c672ca8ef704fa8d0687f8c6a705f83e7e WatchSource:0}: Error finding container 17e8a5eedae6c91199fb874d20dc61c672ca8ef704fa8d0687f8c6a705f83e7e: Status 404 returned error can't find the container with id 17e8a5eedae6c91199fb874d20dc61c672ca8ef704fa8d0687f8c6a705f83e7e Mar 12 15:01:09 crc kubenswrapper[4832]: I0312 15:01:09.366036 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b" event={"ID":"c89f0058-b036-4452-b358-f49f86a66fb7","Type":"ContainerDied","Data":"4260cecb5bc45236411eafc2d5b85bbf945674f71cbea80f26bec44e9445ae43"} Mar 12 15:01:09 crc kubenswrapper[4832]: I0312 15:01:09.366087 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4260cecb5bc45236411eafc2d5b85bbf945674f71cbea80f26bec44e9445ae43" Mar 12 15:01:09 crc kubenswrapper[4832]: I0312 15:01:09.366044 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b" Mar 12 15:01:09 crc kubenswrapper[4832]: I0312 15:01:09.367119 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8q7d" event={"ID":"24ad1ef0-306a-47c0-95bb-7f3a55a471ea","Type":"ContainerStarted","Data":"17e8a5eedae6c91199fb874d20dc61c672ca8ef704fa8d0687f8c6a705f83e7e"} Mar 12 15:01:11 crc kubenswrapper[4832]: E0312 15:01:11.645540 4832 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.026s" Mar 12 15:01:11 crc kubenswrapper[4832]: I0312 15:01:11.654797 4832 generic.go:334] "Generic (PLEG): container finished" podID="24ad1ef0-306a-47c0-95bb-7f3a55a471ea" containerID="1e519bb959a52d0521180c19482bd64a326f83e9361be5d74ed31e3012536a0a" exitCode=0 Mar 12 15:01:11 crc kubenswrapper[4832]: I0312 15:01:11.654836 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8q7d" event={"ID":"24ad1ef0-306a-47c0-95bb-7f3a55a471ea","Type":"ContainerDied","Data":"1e519bb959a52d0521180c19482bd64a326f83e9361be5d74ed31e3012536a0a"} Mar 12 15:01:13 crc kubenswrapper[4832]: I0312 15:01:13.844004 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-qfcxs"] Mar 12 15:01:13 crc kubenswrapper[4832]: I0312 15:01:13.845060 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-qfcxs" Mar 12 15:01:13 crc kubenswrapper[4832]: I0312 15:01:13.847062 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 12 15:01:13 crc kubenswrapper[4832]: I0312 15:01:13.847161 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-vj5db" Mar 12 15:01:13 crc kubenswrapper[4832]: I0312 15:01:13.847740 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 12 15:01:13 crc kubenswrapper[4832]: I0312 15:01:13.854966 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-qfcxs"] Mar 12 15:01:13 crc kubenswrapper[4832]: I0312 15:01:13.874396 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k97d9\" (UniqueName: \"kubernetes.io/projected/fe09f7b6-ef26-402f-9890-d0cf00dde01b-kube-api-access-k97d9\") pod \"nmstate-operator-796d4cfff4-qfcxs\" (UID: \"fe09f7b6-ef26-402f-9890-d0cf00dde01b\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-qfcxs" Mar 12 15:01:13 crc kubenswrapper[4832]: I0312 15:01:13.975624 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k97d9\" (UniqueName: \"kubernetes.io/projected/fe09f7b6-ef26-402f-9890-d0cf00dde01b-kube-api-access-k97d9\") pod \"nmstate-operator-796d4cfff4-qfcxs\" (UID: \"fe09f7b6-ef26-402f-9890-d0cf00dde01b\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-qfcxs" Mar 12 15:01:13 crc kubenswrapper[4832]: I0312 15:01:13.993920 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k97d9\" (UniqueName: \"kubernetes.io/projected/fe09f7b6-ef26-402f-9890-d0cf00dde01b-kube-api-access-k97d9\") pod \"nmstate-operator-796d4cfff4-qfcxs\" (UID: 
\"fe09f7b6-ef26-402f-9890-d0cf00dde01b\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-qfcxs" Mar 12 15:01:14 crc kubenswrapper[4832]: I0312 15:01:14.185025 4832 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 12 15:01:14 crc kubenswrapper[4832]: I0312 15:01:14.217044 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-qfcxs" Mar 12 15:01:14 crc kubenswrapper[4832]: I0312 15:01:14.652469 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-qfcxs"] Mar 12 15:01:15 crc kubenswrapper[4832]: I0312 15:01:15.675611 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-qfcxs" event={"ID":"fe09f7b6-ef26-402f-9890-d0cf00dde01b","Type":"ContainerStarted","Data":"150eb321fe6f7a9b9d8d90e91d620acd28aa1ab0bef2ce20c58ed7406e89d8de"} Mar 12 15:01:20 crc kubenswrapper[4832]: I0312 15:01:20.701255 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-qfcxs" event={"ID":"fe09f7b6-ef26-402f-9890-d0cf00dde01b","Type":"ContainerStarted","Data":"e902c634f70d91f6e2f912afe82a0713058c6ab66f77873c7815568401a17c1f"} Mar 12 15:01:20 crc kubenswrapper[4832]: I0312 15:01:20.703454 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8q7d" event={"ID":"24ad1ef0-306a-47c0-95bb-7f3a55a471ea","Type":"ContainerStarted","Data":"7cf3523bf99a3bdafe0b571226a1152ae20d8f19799b340baa16398b647d6fb0"} Mar 12 15:01:20 crc kubenswrapper[4832]: I0312 15:01:20.725691 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-qfcxs" podStartSLOduration=2.039683539 podStartE2EDuration="7.725667993s" podCreationTimestamp="2026-03-12 15:01:13 +0000 UTC" firstStartedPulling="2026-03-12 
15:01:14.676307214 +0000 UTC m=+833.320321440" lastFinishedPulling="2026-03-12 15:01:20.362291678 +0000 UTC m=+839.006305894" observedRunningTime="2026-03-12 15:01:20.723744158 +0000 UTC m=+839.367758404" watchObservedRunningTime="2026-03-12 15:01:20.725667993 +0000 UTC m=+839.369682239" Mar 12 15:01:21 crc kubenswrapper[4832]: I0312 15:01:21.710665 4832 generic.go:334] "Generic (PLEG): container finished" podID="24ad1ef0-306a-47c0-95bb-7f3a55a471ea" containerID="7cf3523bf99a3bdafe0b571226a1152ae20d8f19799b340baa16398b647d6fb0" exitCode=0 Mar 12 15:01:21 crc kubenswrapper[4832]: I0312 15:01:21.710750 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8q7d" event={"ID":"24ad1ef0-306a-47c0-95bb-7f3a55a471ea","Type":"ContainerDied","Data":"7cf3523bf99a3bdafe0b571226a1152ae20d8f19799b340baa16398b647d6fb0"} Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.717610 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8q7d" event={"ID":"24ad1ef0-306a-47c0-95bb-7f3a55a471ea","Type":"ContainerStarted","Data":"36100e93acc875b0e06b4c3e32c1ff52745381f8e854e3a091e527e1253800e9"} Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.743538 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s8q7d" podStartSLOduration=4.243344236 podStartE2EDuration="14.743521517s" podCreationTimestamp="2026-03-12 15:01:08 +0000 UTC" firstStartedPulling="2026-03-12 15:01:11.656211387 +0000 UTC m=+830.300225603" lastFinishedPulling="2026-03-12 15:01:22.156388658 +0000 UTC m=+840.800402884" observedRunningTime="2026-03-12 15:01:22.739393848 +0000 UTC m=+841.383408084" watchObservedRunningTime="2026-03-12 15:01:22.743521517 +0000 UTC m=+841.387535733" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.841146 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-nk4bz"] Mar 12 
15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.842155 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nk4bz" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.843870 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-sgssp" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.857960 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-nk4bz"] Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.864754 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-mhmks"] Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.865846 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mhmks" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.877985 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.889829 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-mhmks"] Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.902824 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-wg6lb"] Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.903489 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wg6lb" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.983908 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-2s9jv"] Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.984849 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2s9jv" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.986199 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1c73d148-e9e6-4b12-b90e-fcdebc0c97f0-dbus-socket\") pod \"nmstate-handler-wg6lb\" (UID: \"1c73d148-e9e6-4b12-b90e-fcdebc0c97f0\") " pod="openshift-nmstate/nmstate-handler-wg6lb" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.986260 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mj92\" (UniqueName: \"kubernetes.io/projected/d5f4d3bf-8464-490f-9874-1442a1e08a2c-kube-api-access-6mj92\") pod \"nmstate-webhook-5f558f5558-mhmks\" (UID: \"d5f4d3bf-8464-490f-9874-1442a1e08a2c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-mhmks" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.986291 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gj4v\" (UniqueName: \"kubernetes.io/projected/3f01f5a7-2adf-424c-9302-a8469626d969-kube-api-access-6gj4v\") pod \"nmstate-console-plugin-86f58fcf4-2s9jv\" (UID: \"3f01f5a7-2adf-424c-9302-a8469626d969\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2s9jv" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.986308 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dlkw\" (UniqueName: \"kubernetes.io/projected/1c73d148-e9e6-4b12-b90e-fcdebc0c97f0-kube-api-access-9dlkw\") pod \"nmstate-handler-wg6lb\" (UID: \"1c73d148-e9e6-4b12-b90e-fcdebc0c97f0\") " pod="openshift-nmstate/nmstate-handler-wg6lb" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.986324 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/d5f4d3bf-8464-490f-9874-1442a1e08a2c-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-mhmks\" (UID: \"d5f4d3bf-8464-490f-9874-1442a1e08a2c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-mhmks" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.986351 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f01f5a7-2adf-424c-9302-a8469626d969-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-2s9jv\" (UID: \"3f01f5a7-2adf-424c-9302-a8469626d969\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2s9jv" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.986370 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1c73d148-e9e6-4b12-b90e-fcdebc0c97f0-ovs-socket\") pod \"nmstate-handler-wg6lb\" (UID: \"1c73d148-e9e6-4b12-b90e-fcdebc0c97f0\") " pod="openshift-nmstate/nmstate-handler-wg6lb" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.986388 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3f01f5a7-2adf-424c-9302-a8469626d969-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-2s9jv\" (UID: \"3f01f5a7-2adf-424c-9302-a8469626d969\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2s9jv" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.986411 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1c73d148-e9e6-4b12-b90e-fcdebc0c97f0-nmstate-lock\") pod \"nmstate-handler-wg6lb\" (UID: \"1c73d148-e9e6-4b12-b90e-fcdebc0c97f0\") " pod="openshift-nmstate/nmstate-handler-wg6lb" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.986439 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzbnm\" (UniqueName: \"kubernetes.io/projected/0d538149-e77a-4bda-8a62-ef47cfc27f04-kube-api-access-bzbnm\") pod \"nmstate-metrics-9b8c8685d-nk4bz\" (UID: \"0d538149-e77a-4bda-8a62-ef47cfc27f04\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nk4bz" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.988115 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.988327 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.988439 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-r2h48" Mar 12 15:01:22 crc kubenswrapper[4832]: I0312 15:01:22.992714 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-2s9jv"] Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.087291 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d5f4d3bf-8464-490f-9874-1442a1e08a2c-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-mhmks\" (UID: \"d5f4d3bf-8464-490f-9874-1442a1e08a2c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-mhmks" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.087615 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f01f5a7-2adf-424c-9302-a8469626d969-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-2s9jv\" (UID: \"3f01f5a7-2adf-424c-9302-a8469626d969\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2s9jv" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.087713 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1c73d148-e9e6-4b12-b90e-fcdebc0c97f0-ovs-socket\") pod \"nmstate-handler-wg6lb\" (UID: \"1c73d148-e9e6-4b12-b90e-fcdebc0c97f0\") " pod="openshift-nmstate/nmstate-handler-wg6lb" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.087800 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3f01f5a7-2adf-424c-9302-a8469626d969-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-2s9jv\" (UID: \"3f01f5a7-2adf-424c-9302-a8469626d969\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2s9jv" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.087878 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1c73d148-e9e6-4b12-b90e-fcdebc0c97f0-nmstate-lock\") pod \"nmstate-handler-wg6lb\" (UID: \"1c73d148-e9e6-4b12-b90e-fcdebc0c97f0\") " pod="openshift-nmstate/nmstate-handler-wg6lb" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.087967 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzbnm\" (UniqueName: \"kubernetes.io/projected/0d538149-e77a-4bda-8a62-ef47cfc27f04-kube-api-access-bzbnm\") pod \"nmstate-metrics-9b8c8685d-nk4bz\" (UID: \"0d538149-e77a-4bda-8a62-ef47cfc27f04\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nk4bz" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.088045 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1c73d148-e9e6-4b12-b90e-fcdebc0c97f0-dbus-socket\") pod \"nmstate-handler-wg6lb\" (UID: \"1c73d148-e9e6-4b12-b90e-fcdebc0c97f0\") " pod="openshift-nmstate/nmstate-handler-wg6lb" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.088313 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/1c73d148-e9e6-4b12-b90e-fcdebc0c97f0-dbus-socket\") pod \"nmstate-handler-wg6lb\" (UID: \"1c73d148-e9e6-4b12-b90e-fcdebc0c97f0\") " pod="openshift-nmstate/nmstate-handler-wg6lb" Mar 12 15:01:23 crc kubenswrapper[4832]: E0312 15:01:23.088296 4832 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.088357 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mj92\" (UniqueName: \"kubernetes.io/projected/d5f4d3bf-8464-490f-9874-1442a1e08a2c-kube-api-access-6mj92\") pod \"nmstate-webhook-5f558f5558-mhmks\" (UID: \"d5f4d3bf-8464-490f-9874-1442a1e08a2c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-mhmks" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.088604 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gj4v\" (UniqueName: \"kubernetes.io/projected/3f01f5a7-2adf-424c-9302-a8469626d969-kube-api-access-6gj4v\") pod \"nmstate-console-plugin-86f58fcf4-2s9jv\" (UID: \"3f01f5a7-2adf-424c-9302-a8469626d969\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2s9jv" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.088710 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dlkw\" (UniqueName: \"kubernetes.io/projected/1c73d148-e9e6-4b12-b90e-fcdebc0c97f0-kube-api-access-9dlkw\") pod \"nmstate-handler-wg6lb\" (UID: \"1c73d148-e9e6-4b12-b90e-fcdebc0c97f0\") " pod="openshift-nmstate/nmstate-handler-wg6lb" Mar 12 15:01:23 crc kubenswrapper[4832]: E0312 15:01:23.088737 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f01f5a7-2adf-424c-9302-a8469626d969-plugin-serving-cert podName:3f01f5a7-2adf-424c-9302-a8469626d969 nodeName:}" failed. 
No retries permitted until 2026-03-12 15:01:23.588713638 +0000 UTC m=+842.232727864 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/3f01f5a7-2adf-424c-9302-a8469626d969-plugin-serving-cert") pod "nmstate-console-plugin-86f58fcf4-2s9jv" (UID: "3f01f5a7-2adf-424c-9302-a8469626d969") : secret "plugin-serving-cert" not found Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.088894 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1c73d148-e9e6-4b12-b90e-fcdebc0c97f0-ovs-socket\") pod \"nmstate-handler-wg6lb\" (UID: \"1c73d148-e9e6-4b12-b90e-fcdebc0c97f0\") " pod="openshift-nmstate/nmstate-handler-wg6lb" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.089301 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1c73d148-e9e6-4b12-b90e-fcdebc0c97f0-nmstate-lock\") pod \"nmstate-handler-wg6lb\" (UID: \"1c73d148-e9e6-4b12-b90e-fcdebc0c97f0\") " pod="openshift-nmstate/nmstate-handler-wg6lb" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.089775 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3f01f5a7-2adf-424c-9302-a8469626d969-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-2s9jv\" (UID: \"3f01f5a7-2adf-424c-9302-a8469626d969\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2s9jv" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.103201 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d5f4d3bf-8464-490f-9874-1442a1e08a2c-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-mhmks\" (UID: \"d5f4d3bf-8464-490f-9874-1442a1e08a2c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-mhmks" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.111137 
4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mj92\" (UniqueName: \"kubernetes.io/projected/d5f4d3bf-8464-490f-9874-1442a1e08a2c-kube-api-access-6mj92\") pod \"nmstate-webhook-5f558f5558-mhmks\" (UID: \"d5f4d3bf-8464-490f-9874-1442a1e08a2c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-mhmks" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.115270 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gj4v\" (UniqueName: \"kubernetes.io/projected/3f01f5a7-2adf-424c-9302-a8469626d969-kube-api-access-6gj4v\") pod \"nmstate-console-plugin-86f58fcf4-2s9jv\" (UID: \"3f01f5a7-2adf-424c-9302-a8469626d969\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2s9jv" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.117877 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dlkw\" (UniqueName: \"kubernetes.io/projected/1c73d148-e9e6-4b12-b90e-fcdebc0c97f0-kube-api-access-9dlkw\") pod \"nmstate-handler-wg6lb\" (UID: \"1c73d148-e9e6-4b12-b90e-fcdebc0c97f0\") " pod="openshift-nmstate/nmstate-handler-wg6lb" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.124476 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzbnm\" (UniqueName: \"kubernetes.io/projected/0d538149-e77a-4bda-8a62-ef47cfc27f04-kube-api-access-bzbnm\") pod \"nmstate-metrics-9b8c8685d-nk4bz\" (UID: \"0d538149-e77a-4bda-8a62-ef47cfc27f04\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nk4bz" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.164791 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nk4bz" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.193766 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mhmks" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.195709 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-948c4f479-hd4cp"] Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.199364 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.205719 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-948c4f479-hd4cp"] Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.219339 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wg6lb" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.392611 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-oauth-serving-cert\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.393008 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-console-serving-cert\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.393029 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-service-ca\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " 
pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.393048 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-console-oauth-config\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.393080 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m6pj\" (UniqueName: \"kubernetes.io/projected/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-kube-api-access-8m6pj\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.393139 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-trusted-ca-bundle\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.393158 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-console-config\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.402685 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-nk4bz"] Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.443694 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-nmstate/nmstate-webhook-5f558f5558-mhmks"] Mar 12 15:01:23 crc kubenswrapper[4832]: W0312 15:01:23.447186 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5f4d3bf_8464_490f_9874_1442a1e08a2c.slice/crio-a32e7b5c385c7f98bb71145a22384152589ef71bef1f1d5da2e2d178f7294e1e WatchSource:0}: Error finding container a32e7b5c385c7f98bb71145a22384152589ef71bef1f1d5da2e2d178f7294e1e: Status 404 returned error can't find the container with id a32e7b5c385c7f98bb71145a22384152589ef71bef1f1d5da2e2d178f7294e1e Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.494403 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m6pj\" (UniqueName: \"kubernetes.io/projected/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-kube-api-access-8m6pj\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.494457 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-trusted-ca-bundle\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.494489 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-console-config\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.494548 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-oauth-serving-cert\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.494619 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-console-serving-cert\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.494646 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-service-ca\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.494667 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-console-oauth-config\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.495575 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-console-config\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.495705 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-trusted-ca-bundle\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.495788 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-oauth-serving-cert\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.496383 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-service-ca\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.499526 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-console-oauth-config\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.499985 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-console-serving-cert\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.510996 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m6pj\" (UniqueName: 
\"kubernetes.io/projected/75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9-kube-api-access-8m6pj\") pod \"console-948c4f479-hd4cp\" (UID: \"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9\") " pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.595274 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f01f5a7-2adf-424c-9302-a8469626d969-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-2s9jv\" (UID: \"3f01f5a7-2adf-424c-9302-a8469626d969\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2s9jv" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.598632 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f01f5a7-2adf-424c-9302-a8469626d969-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-2s9jv\" (UID: \"3f01f5a7-2adf-424c-9302-a8469626d969\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2s9jv" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.599145 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2s9jv" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.616645 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-948c4f479-hd4cp" Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.725182 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nk4bz" event={"ID":"0d538149-e77a-4bda-8a62-ef47cfc27f04","Type":"ContainerStarted","Data":"d6798884d6d157da44ff816302f325b367dc480e07779a0b51716406a094ea75"} Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.726176 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wg6lb" event={"ID":"1c73d148-e9e6-4b12-b90e-fcdebc0c97f0","Type":"ContainerStarted","Data":"dc9f3d6da2ac181244ab4dd758fac6b783166701e411a3d3b367034e0dd67378"} Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.727193 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mhmks" event={"ID":"d5f4d3bf-8464-490f-9874-1442a1e08a2c","Type":"ContainerStarted","Data":"a32e7b5c385c7f98bb71145a22384152589ef71bef1f1d5da2e2d178f7294e1e"} Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.808543 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-948c4f479-hd4cp"] Mar 12 15:01:23 crc kubenswrapper[4832]: W0312 15:01:23.814370 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75fcb1d9_3eec_43c6_b4bd_7d8cd66a7ab9.slice/crio-bbe8429ac0157cf372b0b45966dc15abe01799153fa75f3f1b6a3d3415022ff3 WatchSource:0}: Error finding container bbe8429ac0157cf372b0b45966dc15abe01799153fa75f3f1b6a3d3415022ff3: Status 404 returned error can't find the container with id bbe8429ac0157cf372b0b45966dc15abe01799153fa75f3f1b6a3d3415022ff3 Mar 12 15:01:23 crc kubenswrapper[4832]: I0312 15:01:23.853992 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-2s9jv"] Mar 12 15:01:23 crc kubenswrapper[4832]: W0312 15:01:23.859195 4832 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f01f5a7_2adf_424c_9302_a8469626d969.slice/crio-7b8117f75aad7f58380c372ceee49d88318fd0e184d1d78a5bda4ca5cc312469 WatchSource:0}: Error finding container 7b8117f75aad7f58380c372ceee49d88318fd0e184d1d78a5bda4ca5cc312469: Status 404 returned error can't find the container with id 7b8117f75aad7f58380c372ceee49d88318fd0e184d1d78a5bda4ca5cc312469 Mar 12 15:01:24 crc kubenswrapper[4832]: I0312 15:01:24.735973 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2s9jv" event={"ID":"3f01f5a7-2adf-424c-9302-a8469626d969","Type":"ContainerStarted","Data":"7b8117f75aad7f58380c372ceee49d88318fd0e184d1d78a5bda4ca5cc312469"} Mar 12 15:01:24 crc kubenswrapper[4832]: I0312 15:01:24.737593 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-948c4f479-hd4cp" event={"ID":"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9","Type":"ContainerStarted","Data":"8fc332496b975e50059606acfe3de3700f2fd340f3819b47d6707d5af3fe59c0"} Mar 12 15:01:24 crc kubenswrapper[4832]: I0312 15:01:24.737641 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-948c4f479-hd4cp" event={"ID":"75fcb1d9-3eec-43c6-b4bd-7d8cd66a7ab9","Type":"ContainerStarted","Data":"bbe8429ac0157cf372b0b45966dc15abe01799153fa75f3f1b6a3d3415022ff3"} Mar 12 15:01:24 crc kubenswrapper[4832]: I0312 15:01:24.779951 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-948c4f479-hd4cp" podStartSLOduration=1.7799255729999999 podStartE2EDuration="1.779925573s" podCreationTimestamp="2026-03-12 15:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:01:24.771164058 +0000 UTC m=+843.415178304" watchObservedRunningTime="2026-03-12 15:01:24.779925573 +0000 UTC 
m=+843.423939839" Mar 12 15:01:26 crc kubenswrapper[4832]: I0312 15:01:26.313841 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:01:26 crc kubenswrapper[4832]: I0312 15:01:26.313906 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:01:26 crc kubenswrapper[4832]: I0312 15:01:26.313966 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 15:01:26 crc kubenswrapper[4832]: I0312 15:01:26.314585 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb2ae425dc6888cba35a19e88d8e8d25fe43467e7d813212653d18ffcd00311f"} pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 15:01:26 crc kubenswrapper[4832]: I0312 15:01:26.314641 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" containerID="cri-o://cb2ae425dc6888cba35a19e88d8e8d25fe43467e7d813212653d18ffcd00311f" gracePeriod=600 Mar 12 15:01:27 crc kubenswrapper[4832]: I0312 15:01:27.772063 4832 generic.go:334] "Generic (PLEG): container finished" podID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" 
containerID="cb2ae425dc6888cba35a19e88d8e8d25fe43467e7d813212653d18ffcd00311f" exitCode=0 Mar 12 15:01:27 crc kubenswrapper[4832]: I0312 15:01:27.772112 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerDied","Data":"cb2ae425dc6888cba35a19e88d8e8d25fe43467e7d813212653d18ffcd00311f"} Mar 12 15:01:27 crc kubenswrapper[4832]: I0312 15:01:27.772628 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerStarted","Data":"06828ac09576ddc400ed9fc2eeb342e216438273fb168f6b943b24fb1b40966f"} Mar 12 15:01:27 crc kubenswrapper[4832]: I0312 15:01:27.772661 4832 scope.go:117] "RemoveContainer" containerID="a6b3b7a564b25e88f4a641ce57b7d20bb5eb79ceb00743653cfa2463d685ad6c" Mar 12 15:01:27 crc kubenswrapper[4832]: I0312 15:01:27.775013 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nk4bz" event={"ID":"0d538149-e77a-4bda-8a62-ef47cfc27f04","Type":"ContainerStarted","Data":"5d7561eb5cec2e46c350aaf606b4b4b6e2cebff19359fe0641feb0834182fd70"} Mar 12 15:01:27 crc kubenswrapper[4832]: I0312 15:01:27.776996 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2s9jv" event={"ID":"3f01f5a7-2adf-424c-9302-a8469626d969","Type":"ContainerStarted","Data":"6a9011c502db3680d28f08d7b28fe2ab83d0847e99e322e817580b71ae8d5081"} Mar 12 15:01:27 crc kubenswrapper[4832]: I0312 15:01:27.781028 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wg6lb" event={"ID":"1c73d148-e9e6-4b12-b90e-fcdebc0c97f0","Type":"ContainerStarted","Data":"26754d94f5573c5516d3ab69e1c0464865c42b4e56ad2b47ab04b9164589003a"} Mar 12 15:01:27 crc kubenswrapper[4832]: I0312 15:01:27.781482 4832 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wg6lb" Mar 12 15:01:27 crc kubenswrapper[4832]: I0312 15:01:27.783111 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mhmks" event={"ID":"d5f4d3bf-8464-490f-9874-1442a1e08a2c","Type":"ContainerStarted","Data":"22db7123a8a39eb3b327211f0e51fcfdb5ef2dec3277b28e34fc35f0ff36e8f7"} Mar 12 15:01:27 crc kubenswrapper[4832]: I0312 15:01:27.783253 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mhmks" Mar 12 15:01:27 crc kubenswrapper[4832]: I0312 15:01:27.827761 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mhmks" podStartSLOduration=2.978552109 podStartE2EDuration="5.827739944s" podCreationTimestamp="2026-03-12 15:01:22 +0000 UTC" firstStartedPulling="2026-03-12 15:01:23.449397516 +0000 UTC m=+842.093411742" lastFinishedPulling="2026-03-12 15:01:26.298585361 +0000 UTC m=+844.942599577" observedRunningTime="2026-03-12 15:01:27.821135981 +0000 UTC m=+846.465150247" watchObservedRunningTime="2026-03-12 15:01:27.827739944 +0000 UTC m=+846.471754170" Mar 12 15:01:27 crc kubenswrapper[4832]: I0312 15:01:27.847181 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-wg6lb" podStartSLOduration=2.797012712 podStartE2EDuration="5.84716166s" podCreationTimestamp="2026-03-12 15:01:22 +0000 UTC" firstStartedPulling="2026-03-12 15:01:23.263201193 +0000 UTC m=+841.907215419" lastFinishedPulling="2026-03-12 15:01:26.313350141 +0000 UTC m=+844.957364367" observedRunningTime="2026-03-12 15:01:27.840897337 +0000 UTC m=+846.484911573" watchObservedRunningTime="2026-03-12 15:01:27.84716166 +0000 UTC m=+846.491175896" Mar 12 15:01:27 crc kubenswrapper[4832]: I0312 15:01:27.867829 4832 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2s9jv" podStartSLOduration=2.397048684 podStartE2EDuration="5.867811961s" podCreationTimestamp="2026-03-12 15:01:22 +0000 UTC" firstStartedPulling="2026-03-12 15:01:23.861879915 +0000 UTC m=+842.505894181" lastFinishedPulling="2026-03-12 15:01:27.332643232 +0000 UTC m=+845.976657458" observedRunningTime="2026-03-12 15:01:27.865237166 +0000 UTC m=+846.509251392" watchObservedRunningTime="2026-03-12 15:01:27.867811961 +0000 UTC m=+846.511826187" Mar 12 15:01:29 crc kubenswrapper[4832]: I0312 15:01:29.164856 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s8q7d" Mar 12 15:01:29 crc kubenswrapper[4832]: I0312 15:01:29.164910 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s8q7d" Mar 12 15:01:29 crc kubenswrapper[4832]: I0312 15:01:29.206036 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s8q7d" Mar 12 15:01:29 crc kubenswrapper[4832]: I0312 15:01:29.853190 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s8q7d" Mar 12 15:01:29 crc kubenswrapper[4832]: I0312 15:01:29.960665 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8q7d"] Mar 12 15:01:30 crc kubenswrapper[4832]: I0312 15:01:30.017975 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qdms9"] Mar 12 15:01:30 crc kubenswrapper[4832]: I0312 15:01:30.018310 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qdms9" podUID="ce99647c-4c5e-4b42-99cc-814f7db9212b" containerName="registry-server" containerID="cri-o://ff1592e00a63219bbaf6dd2453513c756d529e9652c4c66c1961f1133477dd5e" gracePeriod=2 Mar 12 15:01:30 crc 
kubenswrapper[4832]: I0312 15:01:30.814787 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nk4bz" event={"ID":"0d538149-e77a-4bda-8a62-ef47cfc27f04","Type":"ContainerStarted","Data":"b6a4c516ca61ada54cc1034ec3c2ff87ee37d5e8850600e66e58d51a3b83b8d3"}
Mar 12 15:01:30 crc kubenswrapper[4832]: I0312 15:01:30.819607 4832 generic.go:334] "Generic (PLEG): container finished" podID="ce99647c-4c5e-4b42-99cc-814f7db9212b" containerID="ff1592e00a63219bbaf6dd2453513c756d529e9652c4c66c1961f1133477dd5e" exitCode=0
Mar 12 15:01:30 crc kubenswrapper[4832]: I0312 15:01:30.819739 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdms9" event={"ID":"ce99647c-4c5e-4b42-99cc-814f7db9212b","Type":"ContainerDied","Data":"ff1592e00a63219bbaf6dd2453513c756d529e9652c4c66c1961f1133477dd5e"}
Mar 12 15:01:31 crc kubenswrapper[4832]: I0312 15:01:31.535290 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qdms9"
Mar 12 15:01:31 crc kubenswrapper[4832]: I0312 15:01:31.719343 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fsn4\" (UniqueName: \"kubernetes.io/projected/ce99647c-4c5e-4b42-99cc-814f7db9212b-kube-api-access-2fsn4\") pod \"ce99647c-4c5e-4b42-99cc-814f7db9212b\" (UID: \"ce99647c-4c5e-4b42-99cc-814f7db9212b\") "
Mar 12 15:01:31 crc kubenswrapper[4832]: I0312 15:01:31.719484 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce99647c-4c5e-4b42-99cc-814f7db9212b-utilities\") pod \"ce99647c-4c5e-4b42-99cc-814f7db9212b\" (UID: \"ce99647c-4c5e-4b42-99cc-814f7db9212b\") "
Mar 12 15:01:31 crc kubenswrapper[4832]: I0312 15:01:31.720192 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce99647c-4c5e-4b42-99cc-814f7db9212b-utilities" (OuterVolumeSpecName: "utilities") pod "ce99647c-4c5e-4b42-99cc-814f7db9212b" (UID: "ce99647c-4c5e-4b42-99cc-814f7db9212b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 15:01:31 crc kubenswrapper[4832]: I0312 15:01:31.720255 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce99647c-4c5e-4b42-99cc-814f7db9212b-catalog-content\") pod \"ce99647c-4c5e-4b42-99cc-814f7db9212b\" (UID: \"ce99647c-4c5e-4b42-99cc-814f7db9212b\") "
Mar 12 15:01:31 crc kubenswrapper[4832]: I0312 15:01:31.721691 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce99647c-4c5e-4b42-99cc-814f7db9212b-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 15:01:31 crc kubenswrapper[4832]: I0312 15:01:31.724132 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce99647c-4c5e-4b42-99cc-814f7db9212b-kube-api-access-2fsn4" (OuterVolumeSpecName: "kube-api-access-2fsn4") pod "ce99647c-4c5e-4b42-99cc-814f7db9212b" (UID: "ce99647c-4c5e-4b42-99cc-814f7db9212b"). InnerVolumeSpecName "kube-api-access-2fsn4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:01:31 crc kubenswrapper[4832]: I0312 15:01:31.822370 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fsn4\" (UniqueName: \"kubernetes.io/projected/ce99647c-4c5e-4b42-99cc-814f7db9212b-kube-api-access-2fsn4\") on node \"crc\" DevicePath \"\""
Mar 12 15:01:31 crc kubenswrapper[4832]: I0312 15:01:31.832171 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qdms9"
Mar 12 15:01:31 crc kubenswrapper[4832]: I0312 15:01:31.832428 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdms9" event={"ID":"ce99647c-4c5e-4b42-99cc-814f7db9212b","Type":"ContainerDied","Data":"e73abc5e010470ecb6742f831648eca178c8a49a905042d5ba3d06424c599741"}
Mar 12 15:01:31 crc kubenswrapper[4832]: I0312 15:01:31.832489 4832 scope.go:117] "RemoveContainer" containerID="ff1592e00a63219bbaf6dd2453513c756d529e9652c4c66c1961f1133477dd5e"
Mar 12 15:01:31 crc kubenswrapper[4832]: I0312 15:01:31.853026 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nk4bz" podStartSLOduration=3.993893639 podStartE2EDuration="9.853009258s" podCreationTimestamp="2026-03-12 15:01:22 +0000 UTC" firstStartedPulling="2026-03-12 15:01:23.410792264 +0000 UTC m=+842.054806490" lastFinishedPulling="2026-03-12 15:01:29.269907883 +0000 UTC m=+847.913922109" observedRunningTime="2026-03-12 15:01:31.850546306 +0000 UTC m=+850.494560522" watchObservedRunningTime="2026-03-12 15:01:31.853009258 +0000 UTC m=+850.497023484"
Mar 12 15:01:31 crc kubenswrapper[4832]: I0312 15:01:31.853842 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce99647c-4c5e-4b42-99cc-814f7db9212b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce99647c-4c5e-4b42-99cc-814f7db9212b" (UID: "ce99647c-4c5e-4b42-99cc-814f7db9212b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 15:01:31 crc kubenswrapper[4832]: I0312 15:01:31.860708 4832 scope.go:117] "RemoveContainer" containerID="7e2dc7eb452ae6840086071cbe157da12b7d5be43f5d096dc071bb47160e7965"
Mar 12 15:01:31 crc kubenswrapper[4832]: I0312 15:01:31.884307 4832 scope.go:117] "RemoveContainer" containerID="afecced00b0206bc256623e9272867721a2f78947950d7cfc924672bd6c01458"
Mar 12 15:01:31 crc kubenswrapper[4832]: I0312 15:01:31.923010 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce99647c-4c5e-4b42-99cc-814f7db9212b-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 12 15:01:32 crc kubenswrapper[4832]: I0312 15:01:32.157087 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qdms9"]
Mar 12 15:01:32 crc kubenswrapper[4832]: I0312 15:01:32.166969 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qdms9"]
Mar 12 15:01:32 crc kubenswrapper[4832]: I0312 15:01:32.626801 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce99647c-4c5e-4b42-99cc-814f7db9212b" path="/var/lib/kubelet/pods/ce99647c-4c5e-4b42-99cc-814f7db9212b/volumes"
Mar 12 15:01:33 crc kubenswrapper[4832]: I0312 15:01:33.248099 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wg6lb"
Mar 12 15:01:33 crc kubenswrapper[4832]: I0312 15:01:33.617163 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-948c4f479-hd4cp"
Mar 12 15:01:33 crc kubenswrapper[4832]: I0312 15:01:33.617224 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-948c4f479-hd4cp"
Mar 12 15:01:33 crc kubenswrapper[4832]: I0312 15:01:33.621407 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-948c4f479-hd4cp"
Mar 12 15:01:33 crc kubenswrapper[4832]: I0312 15:01:33.855910 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-948c4f479-hd4cp"
Mar 12 15:01:33 crc kubenswrapper[4832]: I0312 15:01:33.915679 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-42j9g"]
Mar 12 15:01:43 crc kubenswrapper[4832]: I0312 15:01:43.204063 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mhmks"
Mar 12 15:01:54 crc kubenswrapper[4832]: I0312 15:01:54.921670 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks"]
Mar 12 15:01:54 crc kubenswrapper[4832]: E0312 15:01:54.922465 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce99647c-4c5e-4b42-99cc-814f7db9212b" containerName="extract-content"
Mar 12 15:01:54 crc kubenswrapper[4832]: I0312 15:01:54.922482 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce99647c-4c5e-4b42-99cc-814f7db9212b" containerName="extract-content"
Mar 12 15:01:54 crc kubenswrapper[4832]: E0312 15:01:54.922716 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce99647c-4c5e-4b42-99cc-814f7db9212b" containerName="extract-utilities"
Mar 12 15:01:54 crc kubenswrapper[4832]: I0312 15:01:54.922728 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce99647c-4c5e-4b42-99cc-814f7db9212b" containerName="extract-utilities"
Mar 12 15:01:54 crc kubenswrapper[4832]: E0312 15:01:54.922741 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce99647c-4c5e-4b42-99cc-814f7db9212b" containerName="registry-server"
Mar 12 15:01:54 crc kubenswrapper[4832]: I0312 15:01:54.922747 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce99647c-4c5e-4b42-99cc-814f7db9212b" containerName="registry-server"
Mar 12 15:01:54 crc kubenswrapper[4832]: I0312 15:01:54.922900 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce99647c-4c5e-4b42-99cc-814f7db9212b" containerName="registry-server"
Mar 12 15:01:54 crc kubenswrapper[4832]: I0312 15:01:54.923954 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks"
Mar 12 15:01:54 crc kubenswrapper[4832]: I0312 15:01:54.927144 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Mar 12 15:01:54 crc kubenswrapper[4832]: I0312 15:01:54.928932 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks"]
Mar 12 15:01:55 crc kubenswrapper[4832]: I0312 15:01:55.037232 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tn92\" (UniqueName: \"kubernetes.io/projected/eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020-kube-api-access-6tn92\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks\" (UID: \"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks"
Mar 12 15:01:55 crc kubenswrapper[4832]: I0312 15:01:55.037302 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks\" (UID: \"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks"
Mar 12 15:01:55 crc kubenswrapper[4832]: I0312 15:01:55.037358 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks\" (UID: \"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks"
Mar 12 15:01:55 crc kubenswrapper[4832]: I0312 15:01:55.138734 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tn92\" (UniqueName: \"kubernetes.io/projected/eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020-kube-api-access-6tn92\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks\" (UID: \"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks"
Mar 12 15:01:55 crc kubenswrapper[4832]: I0312 15:01:55.138822 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks\" (UID: \"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks"
Mar 12 15:01:55 crc kubenswrapper[4832]: I0312 15:01:55.138864 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks\" (UID: \"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks"
Mar 12 15:01:55 crc kubenswrapper[4832]: I0312 15:01:55.139643 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks\" (UID: \"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks"
Mar 12 15:01:55 crc kubenswrapper[4832]: I0312 15:01:55.140999 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks\" (UID: \"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks"
Mar 12 15:01:55 crc kubenswrapper[4832]: I0312 15:01:55.171611 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tn92\" (UniqueName: \"kubernetes.io/projected/eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020-kube-api-access-6tn92\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks\" (UID: \"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks"
Mar 12 15:01:55 crc kubenswrapper[4832]: I0312 15:01:55.244054 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks"
Mar 12 15:01:55 crc kubenswrapper[4832]: I0312 15:01:55.673319 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks"]
Mar 12 15:01:55 crc kubenswrapper[4832]: I0312 15:01:55.997024 4832 generic.go:334] "Generic (PLEG): container finished" podID="eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020" containerID="a49cb09061010c481f08ec29b4bc4a4ed1a5e3c5c44e9df4b399fe545c6a8fa8" exitCode=0
Mar 12 15:01:55 crc kubenswrapper[4832]: I0312 15:01:55.997066 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks" event={"ID":"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020","Type":"ContainerDied","Data":"a49cb09061010c481f08ec29b4bc4a4ed1a5e3c5c44e9df4b399fe545c6a8fa8"}
Mar 12 15:01:55 crc kubenswrapper[4832]: I0312 15:01:55.997286 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks" event={"ID":"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020","Type":"ContainerStarted","Data":"9f9e75ceeaab909f5142f8ccc1f80d4b9e742251d67e5196e5cec1b8e5b4283f"}
Mar 12 15:01:58 crc kubenswrapper[4832]: I0312 15:01:58.961862 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-42j9g" podUID="82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06" containerName="console" containerID="cri-o://44bb98e1bd9057c73143d286bec50d90408b4c12a5a16c8cc7e60f78cc466199" gracePeriod=15
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.305959 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-42j9g_82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06/console/0.log"
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.306031 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-42j9g"
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.410617 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-trusted-ca-bundle\") pod \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") "
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.410685 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-console-config\") pod \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") "
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.410725 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-console-oauth-config\") pod \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") "
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.410751 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-oauth-serving-cert\") pod \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") "
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.410768 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-console-serving-cert\") pod \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") "
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.410819 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8pkh\" (UniqueName: \"kubernetes.io/projected/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-kube-api-access-x8pkh\") pod \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") "
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.410894 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-service-ca\") pod \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\" (UID: \"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06\") "
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.411650 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06" (UID: "82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.411667 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-service-ca" (OuterVolumeSpecName: "service-ca") pod "82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06" (UID: "82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.411955 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-console-config" (OuterVolumeSpecName: "console-config") pod "82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06" (UID: "82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.412437 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06" (UID: "82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.416309 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06" (UID: "82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.416543 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-kube-api-access-x8pkh" (OuterVolumeSpecName: "kube-api-access-x8pkh") pod "82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06" (UID: "82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06"). InnerVolumeSpecName "kube-api-access-x8pkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.416948 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06" (UID: "82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.512637 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-service-ca\") on node \"crc\" DevicePath \"\""
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.512670 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.512682 4832 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-console-config\") on node \"crc\" DevicePath \"\""
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.512690 4832 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-console-oauth-config\") on node \"crc\" DevicePath \"\""
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.512699 4832 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.512707 4832 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-console-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 15:01:59 crc kubenswrapper[4832]: I0312 15:01:59.512716 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8pkh\" (UniqueName: \"kubernetes.io/projected/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06-kube-api-access-x8pkh\") on node \"crc\" DevicePath \"\""
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.026105 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-42j9g_82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06/console/0.log"
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.026156 4832 generic.go:334] "Generic (PLEG): container finished" podID="82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06" containerID="44bb98e1bd9057c73143d286bec50d90408b4c12a5a16c8cc7e60f78cc466199" exitCode=2
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.026186 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-42j9g" event={"ID":"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06","Type":"ContainerDied","Data":"44bb98e1bd9057c73143d286bec50d90408b4c12a5a16c8cc7e60f78cc466199"}
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.026214 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-42j9g" event={"ID":"82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06","Type":"ContainerDied","Data":"cd4276755d6479f784ca5f8b8ffac1b8497cc93ee54153e41cc18fd9a5bcf766"}
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.026232 4832 scope.go:117] "RemoveContainer" containerID="44bb98e1bd9057c73143d286bec50d90408b4c12a5a16c8cc7e60f78cc466199"
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.026234 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-42j9g"
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.045877 4832 scope.go:117] "RemoveContainer" containerID="44bb98e1bd9057c73143d286bec50d90408b4c12a5a16c8cc7e60f78cc466199"
Mar 12 15:02:00 crc kubenswrapper[4832]: E0312 15:02:00.046759 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44bb98e1bd9057c73143d286bec50d90408b4c12a5a16c8cc7e60f78cc466199\": container with ID starting with 44bb98e1bd9057c73143d286bec50d90408b4c12a5a16c8cc7e60f78cc466199 not found: ID does not exist" containerID="44bb98e1bd9057c73143d286bec50d90408b4c12a5a16c8cc7e60f78cc466199"
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.046825 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44bb98e1bd9057c73143d286bec50d90408b4c12a5a16c8cc7e60f78cc466199"} err="failed to get container status \"44bb98e1bd9057c73143d286bec50d90408b4c12a5a16c8cc7e60f78cc466199\": rpc error: code = NotFound desc = could not find container \"44bb98e1bd9057c73143d286bec50d90408b4c12a5a16c8cc7e60f78cc466199\": container with ID starting with 44bb98e1bd9057c73143d286bec50d90408b4c12a5a16c8cc7e60f78cc466199 not found: ID does not exist"
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.058817 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-42j9g"]
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.064989 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-42j9g"]
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.136394 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555462-wmmjh"]
Mar 12 15:02:00 crc kubenswrapper[4832]: E0312 15:02:00.136861 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06" containerName="console"
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.136944 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06" containerName="console"
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.137175 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06" containerName="console"
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.137758 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555462-wmmjh"
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.141108 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.141408 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.141801 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9"
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.148327 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555462-wmmjh"]
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.323399 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txqjd\" (UniqueName: \"kubernetes.io/projected/b40d15e5-51c1-4be3-b8c1-6b92290aa59d-kube-api-access-txqjd\") pod \"auto-csr-approver-29555462-wmmjh\" (UID: \"b40d15e5-51c1-4be3-b8c1-6b92290aa59d\") " pod="openshift-infra/auto-csr-approver-29555462-wmmjh"
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.424236 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txqjd\" (UniqueName: \"kubernetes.io/projected/b40d15e5-51c1-4be3-b8c1-6b92290aa59d-kube-api-access-txqjd\") pod \"auto-csr-approver-29555462-wmmjh\" (UID: \"b40d15e5-51c1-4be3-b8c1-6b92290aa59d\") " pod="openshift-infra/auto-csr-approver-29555462-wmmjh"
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.452381 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txqjd\" (UniqueName: \"kubernetes.io/projected/b40d15e5-51c1-4be3-b8c1-6b92290aa59d-kube-api-access-txqjd\") pod \"auto-csr-approver-29555462-wmmjh\" (UID: \"b40d15e5-51c1-4be3-b8c1-6b92290aa59d\") " pod="openshift-infra/auto-csr-approver-29555462-wmmjh"
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.466712 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555462-wmmjh"
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.627224 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06" path="/var/lib/kubelet/pods/82bfa6e2-c2b7-4bd5-a03d-0e0c30008b06/volumes"
Mar 12 15:02:00 crc kubenswrapper[4832]: I0312 15:02:00.689517 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555462-wmmjh"]
Mar 12 15:02:01 crc kubenswrapper[4832]: I0312 15:02:01.033257 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555462-wmmjh" event={"ID":"b40d15e5-51c1-4be3-b8c1-6b92290aa59d","Type":"ContainerStarted","Data":"fd5e303dee8348c907030f7aaa0e75606ed21195adbe10530606cad3788e9662"}
Mar 12 15:02:03 crc kubenswrapper[4832]: I0312 15:02:03.048886 4832 generic.go:334] "Generic (PLEG): container finished" podID="eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020" containerID="b5b5da0c6532a7ea5f2a876a2b6959576d018be8f7f10653da22c69e72e356c6" exitCode=0
Mar 12 15:02:03 crc kubenswrapper[4832]: I0312 15:02:03.048954 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks" event={"ID":"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020","Type":"ContainerDied","Data":"b5b5da0c6532a7ea5f2a876a2b6959576d018be8f7f10653da22c69e72e356c6"}
Mar 12 15:02:03 crc kubenswrapper[4832]: I0312 15:02:03.052319 4832 generic.go:334] "Generic (PLEG): container finished" podID="b40d15e5-51c1-4be3-b8c1-6b92290aa59d" containerID="d4746c978b13da39a08686d05c6de4035f325bfd76850bc79cfdd294b83d442a" exitCode=0
Mar 12 15:02:03 crc kubenswrapper[4832]: I0312 15:02:03.052364 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555462-wmmjh" event={"ID":"b40d15e5-51c1-4be3-b8c1-6b92290aa59d","Type":"ContainerDied","Data":"d4746c978b13da39a08686d05c6de4035f325bfd76850bc79cfdd294b83d442a"}
Mar 12 15:02:04 crc kubenswrapper[4832]: I0312 15:02:04.061474 4832 generic.go:334] "Generic (PLEG): container finished" podID="eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020" containerID="87946589e0bc933824411fb97432b717e34cfb4bdf5ab04900d95c07ee143eca" exitCode=0
Mar 12 15:02:04 crc kubenswrapper[4832]: I0312 15:02:04.061532 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks" event={"ID":"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020","Type":"ContainerDied","Data":"87946589e0bc933824411fb97432b717e34cfb4bdf5ab04900d95c07ee143eca"}
Mar 12 15:02:04 crc kubenswrapper[4832]: I0312 15:02:04.306264 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555462-wmmjh"
Mar 12 15:02:04 crc kubenswrapper[4832]: I0312 15:02:04.482015 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txqjd\" (UniqueName: \"kubernetes.io/projected/b40d15e5-51c1-4be3-b8c1-6b92290aa59d-kube-api-access-txqjd\") pod \"b40d15e5-51c1-4be3-b8c1-6b92290aa59d\" (UID: \"b40d15e5-51c1-4be3-b8c1-6b92290aa59d\") "
Mar 12 15:02:04 crc kubenswrapper[4832]: I0312 15:02:04.487727 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b40d15e5-51c1-4be3-b8c1-6b92290aa59d-kube-api-access-txqjd" (OuterVolumeSpecName: "kube-api-access-txqjd") pod "b40d15e5-51c1-4be3-b8c1-6b92290aa59d" (UID: "b40d15e5-51c1-4be3-b8c1-6b92290aa59d"). InnerVolumeSpecName "kube-api-access-txqjd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:02:04 crc kubenswrapper[4832]: I0312 15:02:04.583132 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txqjd\" (UniqueName: \"kubernetes.io/projected/b40d15e5-51c1-4be3-b8c1-6b92290aa59d-kube-api-access-txqjd\") on node \"crc\" DevicePath \"\""
Mar 12 15:02:05 crc kubenswrapper[4832]: I0312 15:02:05.069372 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555462-wmmjh"
Mar 12 15:02:05 crc kubenswrapper[4832]: I0312 15:02:05.069361 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555462-wmmjh" event={"ID":"b40d15e5-51c1-4be3-b8c1-6b92290aa59d","Type":"ContainerDied","Data":"fd5e303dee8348c907030f7aaa0e75606ed21195adbe10530606cad3788e9662"}
Mar 12 15:02:05 crc kubenswrapper[4832]: I0312 15:02:05.069892 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd5e303dee8348c907030f7aaa0e75606ed21195adbe10530606cad3788e9662"
Mar 12 15:02:05 crc kubenswrapper[4832]: I0312 15:02:05.323811 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks"
Mar 12 15:02:05 crc kubenswrapper[4832]: I0312 15:02:05.359272 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555456-ccz8p"]
Mar 12 15:02:05 crc kubenswrapper[4832]: I0312 15:02:05.363907 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555456-ccz8p"]
Mar 12 15:02:05 crc kubenswrapper[4832]: I0312 15:02:05.496203 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tn92\" (UniqueName: \"kubernetes.io/projected/eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020-kube-api-access-6tn92\") pod \"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020\" (UID: \"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020\") "
Mar 12 15:02:05 crc kubenswrapper[4832]: I0312 15:02:05.496308 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020-bundle\") pod \"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020\" (UID: \"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020\") "
Mar 12 15:02:05 crc kubenswrapper[4832]: I0312 15:02:05.496347 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020-util\") pod \"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020\" (UID: \"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020\") "
Mar 12 15:02:05 crc kubenswrapper[4832]: I0312 15:02:05.498030 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020-bundle" (OuterVolumeSpecName: "bundle") pod "eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020" (UID: "eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 15:02:05 crc kubenswrapper[4832]: I0312 15:02:05.500613 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020-kube-api-access-6tn92" (OuterVolumeSpecName: "kube-api-access-6tn92") pod "eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020" (UID: "eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020"). InnerVolumeSpecName "kube-api-access-6tn92". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:02:05 crc kubenswrapper[4832]: I0312 15:02:05.510663 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020-util" (OuterVolumeSpecName: "util") pod "eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020" (UID: "eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 15:02:05 crc kubenswrapper[4832]: I0312 15:02:05.597667 4832 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 15:02:05 crc kubenswrapper[4832]: I0312 15:02:05.597976 4832 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020-util\") on node \"crc\" DevicePath \"\""
Mar 12 15:02:05 crc kubenswrapper[4832]: I0312 15:02:05.597997 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tn92\" (UniqueName: \"kubernetes.io/projected/eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020-kube-api-access-6tn92\") on node \"crc\" DevicePath \"\""
Mar 12 15:02:06 crc kubenswrapper[4832]: I0312 15:02:06.076626 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks" event={"ID":"eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020","Type":"ContainerDied","Data":"9f9e75ceeaab909f5142f8ccc1f80d4b9e742251d67e5196e5cec1b8e5b4283f"}
Mar 12 15:02:06 crc kubenswrapper[4832]: I0312 15:02:06.076718 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f9e75ceeaab909f5142f8ccc1f80d4b9e742251d67e5196e5cec1b8e5b4283f"
Mar 12 15:02:06 crc kubenswrapper[4832]: I0312 15:02:06.076720 4832 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks" Mar 12 15:02:06 crc kubenswrapper[4832]: I0312 15:02:06.632741 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0975370-5c54-4fb7-a1ce-f032ad5085c2" path="/var/lib/kubelet/pods/b0975370-5c54-4fb7-a1ce-f032ad5085c2/volumes" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.743271 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-866fc7dbb5-9m9sh"] Mar 12 15:02:18 crc kubenswrapper[4832]: E0312 15:02:18.744172 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b40d15e5-51c1-4be3-b8c1-6b92290aa59d" containerName="oc" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.744192 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b40d15e5-51c1-4be3-b8c1-6b92290aa59d" containerName="oc" Mar 12 15:02:18 crc kubenswrapper[4832]: E0312 15:02:18.744212 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020" containerName="pull" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.744222 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020" containerName="pull" Mar 12 15:02:18 crc kubenswrapper[4832]: E0312 15:02:18.744248 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020" containerName="util" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.744261 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020" containerName="util" Mar 12 15:02:18 crc kubenswrapper[4832]: E0312 15:02:18.744286 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020" containerName="extract" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.744298 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020" containerName="extract" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.744453 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020" containerName="extract" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.744467 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b40d15e5-51c1-4be3-b8c1-6b92290aa59d" containerName="oc" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.745065 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-866fc7dbb5-9m9sh" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.746758 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.747009 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.747042 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.747124 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.747156 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-p8t52" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.763169 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-866fc7dbb5-9m9sh"] Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.792773 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk9sm\" (UniqueName: 
\"kubernetes.io/projected/e22ca826-f71d-4391-a004-03da3653e5d0-kube-api-access-mk9sm\") pod \"metallb-operator-controller-manager-866fc7dbb5-9m9sh\" (UID: \"e22ca826-f71d-4391-a004-03da3653e5d0\") " pod="metallb-system/metallb-operator-controller-manager-866fc7dbb5-9m9sh" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.792858 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e22ca826-f71d-4391-a004-03da3653e5d0-webhook-cert\") pod \"metallb-operator-controller-manager-866fc7dbb5-9m9sh\" (UID: \"e22ca826-f71d-4391-a004-03da3653e5d0\") " pod="metallb-system/metallb-operator-controller-manager-866fc7dbb5-9m9sh" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.792883 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e22ca826-f71d-4391-a004-03da3653e5d0-apiservice-cert\") pod \"metallb-operator-controller-manager-866fc7dbb5-9m9sh\" (UID: \"e22ca826-f71d-4391-a004-03da3653e5d0\") " pod="metallb-system/metallb-operator-controller-manager-866fc7dbb5-9m9sh" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.893582 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e22ca826-f71d-4391-a004-03da3653e5d0-webhook-cert\") pod \"metallb-operator-controller-manager-866fc7dbb5-9m9sh\" (UID: \"e22ca826-f71d-4391-a004-03da3653e5d0\") " pod="metallb-system/metallb-operator-controller-manager-866fc7dbb5-9m9sh" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.893622 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e22ca826-f71d-4391-a004-03da3653e5d0-apiservice-cert\") pod \"metallb-operator-controller-manager-866fc7dbb5-9m9sh\" (UID: \"e22ca826-f71d-4391-a004-03da3653e5d0\") " 
pod="metallb-system/metallb-operator-controller-manager-866fc7dbb5-9m9sh" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.893668 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk9sm\" (UniqueName: \"kubernetes.io/projected/e22ca826-f71d-4391-a004-03da3653e5d0-kube-api-access-mk9sm\") pod \"metallb-operator-controller-manager-866fc7dbb5-9m9sh\" (UID: \"e22ca826-f71d-4391-a004-03da3653e5d0\") " pod="metallb-system/metallb-operator-controller-manager-866fc7dbb5-9m9sh" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.899374 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e22ca826-f71d-4391-a004-03da3653e5d0-webhook-cert\") pod \"metallb-operator-controller-manager-866fc7dbb5-9m9sh\" (UID: \"e22ca826-f71d-4391-a004-03da3653e5d0\") " pod="metallb-system/metallb-operator-controller-manager-866fc7dbb5-9m9sh" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.902041 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e22ca826-f71d-4391-a004-03da3653e5d0-apiservice-cert\") pod \"metallb-operator-controller-manager-866fc7dbb5-9m9sh\" (UID: \"e22ca826-f71d-4391-a004-03da3653e5d0\") " pod="metallb-system/metallb-operator-controller-manager-866fc7dbb5-9m9sh" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.912148 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk9sm\" (UniqueName: \"kubernetes.io/projected/e22ca826-f71d-4391-a004-03da3653e5d0-kube-api-access-mk9sm\") pod \"metallb-operator-controller-manager-866fc7dbb5-9m9sh\" (UID: \"e22ca826-f71d-4391-a004-03da3653e5d0\") " pod="metallb-system/metallb-operator-controller-manager-866fc7dbb5-9m9sh" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.976124 4832 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["metallb-system/metallb-operator-webhook-server-79f9854454-ngt5l"] Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.976787 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79f9854454-ngt5l" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.983120 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-lc65g" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.983396 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.983729 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 12 15:02:18 crc kubenswrapper[4832]: I0312 15:02:18.998126 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79f9854454-ngt5l"] Mar 12 15:02:19 crc kubenswrapper[4832]: I0312 15:02:19.070390 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-866fc7dbb5-9m9sh" Mar 12 15:02:19 crc kubenswrapper[4832]: I0312 15:02:19.095840 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/62554f83-27bb-4e87-941a-f6bdff2d9f99-apiservice-cert\") pod \"metallb-operator-webhook-server-79f9854454-ngt5l\" (UID: \"62554f83-27bb-4e87-941a-f6bdff2d9f99\") " pod="metallb-system/metallb-operator-webhook-server-79f9854454-ngt5l" Mar 12 15:02:19 crc kubenswrapper[4832]: I0312 15:02:19.095913 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/62554f83-27bb-4e87-941a-f6bdff2d9f99-webhook-cert\") pod \"metallb-operator-webhook-server-79f9854454-ngt5l\" (UID: \"62554f83-27bb-4e87-941a-f6bdff2d9f99\") " pod="metallb-system/metallb-operator-webhook-server-79f9854454-ngt5l" Mar 12 15:02:19 crc kubenswrapper[4832]: I0312 15:02:19.095939 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlsdv\" (UniqueName: \"kubernetes.io/projected/62554f83-27bb-4e87-941a-f6bdff2d9f99-kube-api-access-jlsdv\") pod \"metallb-operator-webhook-server-79f9854454-ngt5l\" (UID: \"62554f83-27bb-4e87-941a-f6bdff2d9f99\") " pod="metallb-system/metallb-operator-webhook-server-79f9854454-ngt5l" Mar 12 15:02:19 crc kubenswrapper[4832]: I0312 15:02:19.198866 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/62554f83-27bb-4e87-941a-f6bdff2d9f99-apiservice-cert\") pod \"metallb-operator-webhook-server-79f9854454-ngt5l\" (UID: \"62554f83-27bb-4e87-941a-f6bdff2d9f99\") " pod="metallb-system/metallb-operator-webhook-server-79f9854454-ngt5l" Mar 12 15:02:19 crc kubenswrapper[4832]: I0312 15:02:19.199253 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/62554f83-27bb-4e87-941a-f6bdff2d9f99-webhook-cert\") pod \"metallb-operator-webhook-server-79f9854454-ngt5l\" (UID: \"62554f83-27bb-4e87-941a-f6bdff2d9f99\") " pod="metallb-system/metallb-operator-webhook-server-79f9854454-ngt5l" Mar 12 15:02:19 crc kubenswrapper[4832]: I0312 15:02:19.199303 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlsdv\" (UniqueName: \"kubernetes.io/projected/62554f83-27bb-4e87-941a-f6bdff2d9f99-kube-api-access-jlsdv\") pod \"metallb-operator-webhook-server-79f9854454-ngt5l\" (UID: \"62554f83-27bb-4e87-941a-f6bdff2d9f99\") " pod="metallb-system/metallb-operator-webhook-server-79f9854454-ngt5l" Mar 12 15:02:19 crc kubenswrapper[4832]: I0312 15:02:19.214660 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/62554f83-27bb-4e87-941a-f6bdff2d9f99-webhook-cert\") pod \"metallb-operator-webhook-server-79f9854454-ngt5l\" (UID: \"62554f83-27bb-4e87-941a-f6bdff2d9f99\") " pod="metallb-system/metallb-operator-webhook-server-79f9854454-ngt5l" Mar 12 15:02:19 crc kubenswrapper[4832]: I0312 15:02:19.215184 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/62554f83-27bb-4e87-941a-f6bdff2d9f99-apiservice-cert\") pod \"metallb-operator-webhook-server-79f9854454-ngt5l\" (UID: \"62554f83-27bb-4e87-941a-f6bdff2d9f99\") " pod="metallb-system/metallb-operator-webhook-server-79f9854454-ngt5l" Mar 12 15:02:19 crc kubenswrapper[4832]: I0312 15:02:19.230187 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlsdv\" (UniqueName: \"kubernetes.io/projected/62554f83-27bb-4e87-941a-f6bdff2d9f99-kube-api-access-jlsdv\") pod \"metallb-operator-webhook-server-79f9854454-ngt5l\" (UID: \"62554f83-27bb-4e87-941a-f6bdff2d9f99\") " 
pod="metallb-system/metallb-operator-webhook-server-79f9854454-ngt5l" Mar 12 15:02:19 crc kubenswrapper[4832]: I0312 15:02:19.297916 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79f9854454-ngt5l" Mar 12 15:02:19 crc kubenswrapper[4832]: I0312 15:02:19.398054 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-866fc7dbb5-9m9sh"] Mar 12 15:02:19 crc kubenswrapper[4832]: I0312 15:02:19.548469 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79f9854454-ngt5l"] Mar 12 15:02:19 crc kubenswrapper[4832]: W0312 15:02:19.558219 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62554f83_27bb_4e87_941a_f6bdff2d9f99.slice/crio-abb14975c483ff8a56190a93a7ab3dc964ab292dadd500290e34d00216ae8a56 WatchSource:0}: Error finding container abb14975c483ff8a56190a93a7ab3dc964ab292dadd500290e34d00216ae8a56: Status 404 returned error can't find the container with id abb14975c483ff8a56190a93a7ab3dc964ab292dadd500290e34d00216ae8a56 Mar 12 15:02:20 crc kubenswrapper[4832]: I0312 15:02:20.149858 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79f9854454-ngt5l" event={"ID":"62554f83-27bb-4e87-941a-f6bdff2d9f99","Type":"ContainerStarted","Data":"abb14975c483ff8a56190a93a7ab3dc964ab292dadd500290e34d00216ae8a56"} Mar 12 15:02:20 crc kubenswrapper[4832]: I0312 15:02:20.151124 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-866fc7dbb5-9m9sh" event={"ID":"e22ca826-f71d-4391-a004-03da3653e5d0","Type":"ContainerStarted","Data":"c652e3e4b2d916ca439336efeac486cbd4ec0a3132b166d8f55d0c12c032e98f"} Mar 12 15:02:24 crc kubenswrapper[4832]: I0312 15:02:24.172588 4832 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="metallb-system/metallb-operator-webhook-server-79f9854454-ngt5l" event={"ID":"62554f83-27bb-4e87-941a-f6bdff2d9f99","Type":"ContainerStarted","Data":"3c33d17d1e3cb1535f8a309feb3fda2dcbb91c5a567c12a2f25bfb4f7f5c0896"} Mar 12 15:02:24 crc kubenswrapper[4832]: I0312 15:02:24.173011 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-79f9854454-ngt5l" Mar 12 15:02:24 crc kubenswrapper[4832]: I0312 15:02:24.174657 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-866fc7dbb5-9m9sh" event={"ID":"e22ca826-f71d-4391-a004-03da3653e5d0","Type":"ContainerStarted","Data":"ad6e49670cba92e0deac5b66121b7d80b80ce4f959d8ae03c54da96e0dcb0a72"} Mar 12 15:02:24 crc kubenswrapper[4832]: I0312 15:02:24.174805 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-866fc7dbb5-9m9sh" Mar 12 15:02:24 crc kubenswrapper[4832]: I0312 15:02:24.191681 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-79f9854454-ngt5l" podStartSLOduration=1.841663038 podStartE2EDuration="6.19166499s" podCreationTimestamp="2026-03-12 15:02:18 +0000 UTC" firstStartedPulling="2026-03-12 15:02:19.560818627 +0000 UTC m=+898.204832853" lastFinishedPulling="2026-03-12 15:02:23.910820579 +0000 UTC m=+902.554834805" observedRunningTime="2026-03-12 15:02:24.187806918 +0000 UTC m=+902.831821144" watchObservedRunningTime="2026-03-12 15:02:24.19166499 +0000 UTC m=+902.835679216" Mar 12 15:02:24 crc kubenswrapper[4832]: I0312 15:02:24.209796 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-866fc7dbb5-9m9sh" podStartSLOduration=1.889529322 podStartE2EDuration="6.209780228s" podCreationTimestamp="2026-03-12 15:02:18 +0000 UTC" firstStartedPulling="2026-03-12 
15:02:19.422075226 +0000 UTC m=+898.066089452" lastFinishedPulling="2026-03-12 15:02:23.742326132 +0000 UTC m=+902.386340358" observedRunningTime="2026-03-12 15:02:24.207775049 +0000 UTC m=+902.851789285" watchObservedRunningTime="2026-03-12 15:02:24.209780228 +0000 UTC m=+902.853794454" Mar 12 15:02:39 crc kubenswrapper[4832]: I0312 15:02:39.303626 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-79f9854454-ngt5l" Mar 12 15:02:41 crc kubenswrapper[4832]: I0312 15:02:41.414979 4832 scope.go:117] "RemoveContainer" containerID="71e5cb51397ab4f5704e9b39092720b500b0a02f7521b9d447435dff3e9844fe" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.073629 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-866fc7dbb5-9m9sh" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.819214 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-q2scw"] Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.824106 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.826658 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-ls7fk" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.826911 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.828034 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.841989 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-frr-conf\") pod \"frr-k8s-q2scw\" (UID: \"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.842192 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-frr-sockets\") pod \"frr-k8s-q2scw\" (UID: \"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.842247 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-metrics-certs\") pod \"frr-k8s-q2scw\" (UID: \"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.842271 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-reloader\") pod \"frr-k8s-q2scw\" (UID: 
\"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.842355 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snr5s\" (UniqueName: \"kubernetes.io/projected/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-kube-api-access-snr5s\") pod \"frr-k8s-q2scw\" (UID: \"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.842528 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-metrics\") pod \"frr-k8s-q2scw\" (UID: \"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.842677 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-frr-startup\") pod \"frr-k8s-q2scw\" (UID: \"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.843350 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-qbvws"] Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.844327 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qbvws" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.845918 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.856580 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-qbvws"] Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.942946 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-6t98f"] Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.943616 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-frr-startup\") pod \"frr-k8s-q2scw\" (UID: \"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.943689 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-frr-conf\") pod \"frr-k8s-q2scw\" (UID: \"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.943752 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-frr-sockets\") pod \"frr-k8s-q2scw\" (UID: \"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.943780 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-metrics-certs\") pod \"frr-k8s-q2scw\" (UID: \"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " 
pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.943806 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-reloader\") pod \"frr-k8s-q2scw\" (UID: \"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.943817 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6t98f" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.943839 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b51d1d8-1bda-4c45-9a0e-c712c078112e-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-qbvws\" (UID: \"8b51d1d8-1bda-4c45-9a0e-c712c078112e\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qbvws" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.943873 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snr5s\" (UniqueName: \"kubernetes.io/projected/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-kube-api-access-snr5s\") pod \"frr-k8s-q2scw\" (UID: \"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.944271 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-metrics\") pod \"frr-k8s-q2scw\" (UID: \"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.944308 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-frr-conf\") pod \"frr-k8s-q2scw\" (UID: 
\"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.944339 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxqq8\" (UniqueName: \"kubernetes.io/projected/8b51d1d8-1bda-4c45-9a0e-c712c078112e-kube-api-access-gxqq8\") pod \"frr-k8s-webhook-server-bcc4b6f68-qbvws\" (UID: \"8b51d1d8-1bda-4c45-9a0e-c712c078112e\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qbvws" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.944484 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-frr-sockets\") pod \"frr-k8s-q2scw\" (UID: \"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.944551 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-reloader\") pod \"frr-k8s-q2scw\" (UID: \"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.944632 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-metrics\") pod \"frr-k8s-q2scw\" (UID: \"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.945346 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-frr-startup\") pod \"frr-k8s-q2scw\" (UID: \"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.948885 4832 reflector.go:368] 
Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.949636 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-metrics-certs\") pod \"frr-k8s-q2scw\" (UID: \"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.964738 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.964939 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-v6rbr" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.965075 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.980522 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-whsb9"] Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.981461 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-whsb9" Mar 12 15:02:59 crc kubenswrapper[4832]: I0312 15:02:59.985675 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.004910 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-whsb9"] Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.009996 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snr5s\" (UniqueName: \"kubernetes.io/projected/5315b3fe-3cfe-49e0-9965-43192f3b0f9c-kube-api-access-snr5s\") pod \"frr-k8s-q2scw\" (UID: \"5315b3fe-3cfe-49e0-9965-43192f3b0f9c\") " pod="metallb-system/frr-k8s-q2scw" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.048484 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b51d1d8-1bda-4c45-9a0e-c712c078112e-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-qbvws\" (UID: \"8b51d1d8-1bda-4c45-9a0e-c712c078112e\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qbvws" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.048560 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kttkt\" (UniqueName: \"kubernetes.io/projected/e1c64350-6632-4181-9c9a-80eb712e2f00-kube-api-access-kttkt\") pod \"speaker-6t98f\" (UID: \"e1c64350-6632-4181-9c9a-80eb712e2f00\") " pod="metallb-system/speaker-6t98f" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.048583 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxqq8\" (UniqueName: \"kubernetes.io/projected/8b51d1d8-1bda-4c45-9a0e-c712c078112e-kube-api-access-gxqq8\") pod \"frr-k8s-webhook-server-bcc4b6f68-qbvws\" (UID: \"8b51d1d8-1bda-4c45-9a0e-c712c078112e\") " 
pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qbvws" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.048601 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9adb53f-cecb-4ad0-b1ec-77504b077006-metrics-certs\") pod \"controller-7bb4cc7c98-whsb9\" (UID: \"c9adb53f-cecb-4ad0-b1ec-77504b077006\") " pod="metallb-system/controller-7bb4cc7c98-whsb9" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.048637 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg4bn\" (UniqueName: \"kubernetes.io/projected/c9adb53f-cecb-4ad0-b1ec-77504b077006-kube-api-access-vg4bn\") pod \"controller-7bb4cc7c98-whsb9\" (UID: \"c9adb53f-cecb-4ad0-b1ec-77504b077006\") " pod="metallb-system/controller-7bb4cc7c98-whsb9" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.048652 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e1c64350-6632-4181-9c9a-80eb712e2f00-metallb-excludel2\") pod \"speaker-6t98f\" (UID: \"e1c64350-6632-4181-9c9a-80eb712e2f00\") " pod="metallb-system/speaker-6t98f" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.048676 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e1c64350-6632-4181-9c9a-80eb712e2f00-memberlist\") pod \"speaker-6t98f\" (UID: \"e1c64350-6632-4181-9c9a-80eb712e2f00\") " pod="metallb-system/speaker-6t98f" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.048691 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9adb53f-cecb-4ad0-b1ec-77504b077006-cert\") pod \"controller-7bb4cc7c98-whsb9\" (UID: \"c9adb53f-cecb-4ad0-b1ec-77504b077006\") " 
pod="metallb-system/controller-7bb4cc7c98-whsb9" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.048708 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e1c64350-6632-4181-9c9a-80eb712e2f00-metrics-certs\") pod \"speaker-6t98f\" (UID: \"e1c64350-6632-4181-9c9a-80eb712e2f00\") " pod="metallb-system/speaker-6t98f" Mar 12 15:03:00 crc kubenswrapper[4832]: E0312 15:03:00.048837 4832 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Mar 12 15:03:00 crc kubenswrapper[4832]: E0312 15:03:00.048909 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b51d1d8-1bda-4c45-9a0e-c712c078112e-cert podName:8b51d1d8-1bda-4c45-9a0e-c712c078112e nodeName:}" failed. No retries permitted until 2026-03-12 15:03:00.548892446 +0000 UTC m=+939.192906662 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8b51d1d8-1bda-4c45-9a0e-c712c078112e-cert") pod "frr-k8s-webhook-server-bcc4b6f68-qbvws" (UID: "8b51d1d8-1bda-4c45-9a0e-c712c078112e") : secret "frr-k8s-webhook-server-cert" not found Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.072366 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxqq8\" (UniqueName: \"kubernetes.io/projected/8b51d1d8-1bda-4c45-9a0e-c712c078112e-kube-api-access-gxqq8\") pod \"frr-k8s-webhook-server-bcc4b6f68-qbvws\" (UID: \"8b51d1d8-1bda-4c45-9a0e-c712c078112e\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qbvws" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.146670 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-q2scw" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.149667 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kttkt\" (UniqueName: \"kubernetes.io/projected/e1c64350-6632-4181-9c9a-80eb712e2f00-kube-api-access-kttkt\") pod \"speaker-6t98f\" (UID: \"e1c64350-6632-4181-9c9a-80eb712e2f00\") " pod="metallb-system/speaker-6t98f" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.149703 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9adb53f-cecb-4ad0-b1ec-77504b077006-metrics-certs\") pod \"controller-7bb4cc7c98-whsb9\" (UID: \"c9adb53f-cecb-4ad0-b1ec-77504b077006\") " pod="metallb-system/controller-7bb4cc7c98-whsb9" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.149755 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg4bn\" (UniqueName: \"kubernetes.io/projected/c9adb53f-cecb-4ad0-b1ec-77504b077006-kube-api-access-vg4bn\") pod \"controller-7bb4cc7c98-whsb9\" (UID: \"c9adb53f-cecb-4ad0-b1ec-77504b077006\") " pod="metallb-system/controller-7bb4cc7c98-whsb9" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.149772 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e1c64350-6632-4181-9c9a-80eb712e2f00-metallb-excludel2\") pod \"speaker-6t98f\" (UID: \"e1c64350-6632-4181-9c9a-80eb712e2f00\") " pod="metallb-system/speaker-6t98f" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.149799 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e1c64350-6632-4181-9c9a-80eb712e2f00-memberlist\") pod \"speaker-6t98f\" (UID: \"e1c64350-6632-4181-9c9a-80eb712e2f00\") " pod="metallb-system/speaker-6t98f" Mar 12 15:03:00 crc kubenswrapper[4832]: 
I0312 15:03:00.149817 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9adb53f-cecb-4ad0-b1ec-77504b077006-cert\") pod \"controller-7bb4cc7c98-whsb9\" (UID: \"c9adb53f-cecb-4ad0-b1ec-77504b077006\") " pod="metallb-system/controller-7bb4cc7c98-whsb9" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.149832 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e1c64350-6632-4181-9c9a-80eb712e2f00-metrics-certs\") pod \"speaker-6t98f\" (UID: \"e1c64350-6632-4181-9c9a-80eb712e2f00\") " pod="metallb-system/speaker-6t98f" Mar 12 15:03:00 crc kubenswrapper[4832]: E0312 15:03:00.149930 4832 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Mar 12 15:03:00 crc kubenswrapper[4832]: E0312 15:03:00.149972 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1c64350-6632-4181-9c9a-80eb712e2f00-metrics-certs podName:e1c64350-6632-4181-9c9a-80eb712e2f00 nodeName:}" failed. No retries permitted until 2026-03-12 15:03:00.64995703 +0000 UTC m=+939.293971256 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e1c64350-6632-4181-9c9a-80eb712e2f00-metrics-certs") pod "speaker-6t98f" (UID: "e1c64350-6632-4181-9c9a-80eb712e2f00") : secret "speaker-certs-secret" not found Mar 12 15:03:00 crc kubenswrapper[4832]: E0312 15:03:00.150277 4832 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 12 15:03:00 crc kubenswrapper[4832]: E0312 15:03:00.150357 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1c64350-6632-4181-9c9a-80eb712e2f00-memberlist podName:e1c64350-6632-4181-9c9a-80eb712e2f00 nodeName:}" failed. 
No retries permitted until 2026-03-12 15:03:00.650337111 +0000 UTC m=+939.294351397 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e1c64350-6632-4181-9c9a-80eb712e2f00-memberlist") pod "speaker-6t98f" (UID: "e1c64350-6632-4181-9c9a-80eb712e2f00") : secret "metallb-memberlist" not found Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.151224 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e1c64350-6632-4181-9c9a-80eb712e2f00-metallb-excludel2\") pod \"speaker-6t98f\" (UID: \"e1c64350-6632-4181-9c9a-80eb712e2f00\") " pod="metallb-system/speaker-6t98f" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.157207 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9adb53f-cecb-4ad0-b1ec-77504b077006-metrics-certs\") pod \"controller-7bb4cc7c98-whsb9\" (UID: \"c9adb53f-cecb-4ad0-b1ec-77504b077006\") " pod="metallb-system/controller-7bb4cc7c98-whsb9" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.163877 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.169227 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg4bn\" (UniqueName: \"kubernetes.io/projected/c9adb53f-cecb-4ad0-b1ec-77504b077006-kube-api-access-vg4bn\") pod \"controller-7bb4cc7c98-whsb9\" (UID: \"c9adb53f-cecb-4ad0-b1ec-77504b077006\") " pod="metallb-system/controller-7bb4cc7c98-whsb9" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.173485 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9adb53f-cecb-4ad0-b1ec-77504b077006-cert\") pod \"controller-7bb4cc7c98-whsb9\" (UID: \"c9adb53f-cecb-4ad0-b1ec-77504b077006\") " 
pod="metallb-system/controller-7bb4cc7c98-whsb9" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.177548 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kttkt\" (UniqueName: \"kubernetes.io/projected/e1c64350-6632-4181-9c9a-80eb712e2f00-kube-api-access-kttkt\") pod \"speaker-6t98f\" (UID: \"e1c64350-6632-4181-9c9a-80eb712e2f00\") " pod="metallb-system/speaker-6t98f" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.309240 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-whsb9" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.388913 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q2scw" event={"ID":"5315b3fe-3cfe-49e0-9965-43192f3b0f9c","Type":"ContainerStarted","Data":"587906ddd1a1f08be0798766975963afcb36274c2edf370901534b86d72528a1"} Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.563468 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b51d1d8-1bda-4c45-9a0e-c712c078112e-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-qbvws\" (UID: \"8b51d1d8-1bda-4c45-9a0e-c712c078112e\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qbvws" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.569230 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b51d1d8-1bda-4c45-9a0e-c712c078112e-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-qbvws\" (UID: \"8b51d1d8-1bda-4c45-9a0e-c712c078112e\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qbvws" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.665018 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e1c64350-6632-4181-9c9a-80eb712e2f00-memberlist\") pod \"speaker-6t98f\" (UID: 
\"e1c64350-6632-4181-9c9a-80eb712e2f00\") " pod="metallb-system/speaker-6t98f" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.665398 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e1c64350-6632-4181-9c9a-80eb712e2f00-metrics-certs\") pod \"speaker-6t98f\" (UID: \"e1c64350-6632-4181-9c9a-80eb712e2f00\") " pod="metallb-system/speaker-6t98f" Mar 12 15:03:00 crc kubenswrapper[4832]: E0312 15:03:00.665168 4832 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 12 15:03:00 crc kubenswrapper[4832]: E0312 15:03:00.665498 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1c64350-6632-4181-9c9a-80eb712e2f00-memberlist podName:e1c64350-6632-4181-9c9a-80eb712e2f00 nodeName:}" failed. No retries permitted until 2026-03-12 15:03:01.665481627 +0000 UTC m=+940.309495853 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e1c64350-6632-4181-9c9a-80eb712e2f00-memberlist") pod "speaker-6t98f" (UID: "e1c64350-6632-4181-9c9a-80eb712e2f00") : secret "metallb-memberlist" not found Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.670223 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e1c64350-6632-4181-9c9a-80eb712e2f00-metrics-certs\") pod \"speaker-6t98f\" (UID: \"e1c64350-6632-4181-9c9a-80eb712e2f00\") " pod="metallb-system/speaker-6t98f" Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.679703 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-whsb9"] Mar 12 15:03:00 crc kubenswrapper[4832]: W0312 15:03:00.681780 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9adb53f_cecb_4ad0_b1ec_77504b077006.slice/crio-1aa4678f8d76e25f9aaa8395f9461dfc7218a3878c01d0f703849e8682992f05 WatchSource:0}: Error finding container 1aa4678f8d76e25f9aaa8395f9461dfc7218a3878c01d0f703849e8682992f05: Status 404 returned error can't find the container with id 1aa4678f8d76e25f9aaa8395f9461dfc7218a3878c01d0f703849e8682992f05 Mar 12 15:03:00 crc kubenswrapper[4832]: I0312 15:03:00.755848 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qbvws" Mar 12 15:03:01 crc kubenswrapper[4832]: I0312 15:03:01.450580 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-whsb9" event={"ID":"c9adb53f-cecb-4ad0-b1ec-77504b077006","Type":"ContainerStarted","Data":"4b82c22df97ec5e2f8c21a696a2f5e265e76a6cc111b60c6dcca02c9710d70a4"} Mar 12 15:03:01 crc kubenswrapper[4832]: I0312 15:03:01.450995 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-whsb9" event={"ID":"c9adb53f-cecb-4ad0-b1ec-77504b077006","Type":"ContainerStarted","Data":"1aa4678f8d76e25f9aaa8395f9461dfc7218a3878c01d0f703849e8682992f05"} Mar 12 15:03:01 crc kubenswrapper[4832]: I0312 15:03:01.640624 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-qbvws"] Mar 12 15:03:01 crc kubenswrapper[4832]: W0312 15:03:01.645766 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b51d1d8_1bda_4c45_9a0e_c712c078112e.slice/crio-1c4243333406973cb8a843646586d7e8a151d70610166d75e08e589a9056a684 WatchSource:0}: Error finding container 1c4243333406973cb8a843646586d7e8a151d70610166d75e08e589a9056a684: Status 404 returned error can't find the container with id 1c4243333406973cb8a843646586d7e8a151d70610166d75e08e589a9056a684 Mar 12 15:03:01 crc 
kubenswrapper[4832]: I0312 15:03:01.748438 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e1c64350-6632-4181-9c9a-80eb712e2f00-memberlist\") pod \"speaker-6t98f\" (UID: \"e1c64350-6632-4181-9c9a-80eb712e2f00\") " pod="metallb-system/speaker-6t98f" Mar 12 15:03:01 crc kubenswrapper[4832]: I0312 15:03:01.753950 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e1c64350-6632-4181-9c9a-80eb712e2f00-memberlist\") pod \"speaker-6t98f\" (UID: \"e1c64350-6632-4181-9c9a-80eb712e2f00\") " pod="metallb-system/speaker-6t98f" Mar 12 15:03:01 crc kubenswrapper[4832]: I0312 15:03:01.792767 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6t98f" Mar 12 15:03:02 crc kubenswrapper[4832]: I0312 15:03:02.466232 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qbvws" event={"ID":"8b51d1d8-1bda-4c45-9a0e-c712c078112e","Type":"ContainerStarted","Data":"1c4243333406973cb8a843646586d7e8a151d70610166d75e08e589a9056a684"} Mar 12 15:03:02 crc kubenswrapper[4832]: I0312 15:03:02.494200 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-whsb9" event={"ID":"c9adb53f-cecb-4ad0-b1ec-77504b077006","Type":"ContainerStarted","Data":"be3c0b79fbb3d783b34df6e4d64cbba451989339b00b1b7a8e9d2866605a27ee"} Mar 12 15:03:02 crc kubenswrapper[4832]: I0312 15:03:02.494273 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-whsb9" Mar 12 15:03:02 crc kubenswrapper[4832]: I0312 15:03:02.499910 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6t98f" event={"ID":"e1c64350-6632-4181-9c9a-80eb712e2f00","Type":"ContainerStarted","Data":"319474e47c37f0b51a6a856aafda47ae937ff30c724f5d145b525176050191c8"} Mar 12 
15:03:02 crc kubenswrapper[4832]: I0312 15:03:02.499955 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6t98f" event={"ID":"e1c64350-6632-4181-9c9a-80eb712e2f00","Type":"ContainerStarted","Data":"27369b6aff72d338da368becc51b5ead23a9c7a0b64a1b8b7c5346c41b306b86"} Mar 12 15:03:02 crc kubenswrapper[4832]: I0312 15:03:02.499968 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6t98f" event={"ID":"e1c64350-6632-4181-9c9a-80eb712e2f00","Type":"ContainerStarted","Data":"d69b74b3ceb50f37266c167d3e7c697c715fdc894f90f5c2ca65648fd466b6ef"} Mar 12 15:03:02 crc kubenswrapper[4832]: I0312 15:03:02.500613 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6t98f" Mar 12 15:03:02 crc kubenswrapper[4832]: I0312 15:03:02.527546 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-whsb9" podStartSLOduration=3.527526037 podStartE2EDuration="3.527526037s" podCreationTimestamp="2026-03-12 15:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:03:02.518998189 +0000 UTC m=+941.163012415" watchObservedRunningTime="2026-03-12 15:03:02.527526037 +0000 UTC m=+941.171540273" Mar 12 15:03:02 crc kubenswrapper[4832]: I0312 15:03:02.566033 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-6t98f" podStartSLOduration=3.566018349 podStartE2EDuration="3.566018349s" podCreationTimestamp="2026-03-12 15:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:03:02.558420427 +0000 UTC m=+941.202434663" watchObservedRunningTime="2026-03-12 15:03:02.566018349 +0000 UTC m=+941.210032575" Mar 12 15:03:10 crc kubenswrapper[4832]: I0312 15:03:10.313758 4832 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-whsb9" Mar 12 15:03:11 crc kubenswrapper[4832]: I0312 15:03:11.599010 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qbvws" event={"ID":"8b51d1d8-1bda-4c45-9a0e-c712c078112e","Type":"ContainerStarted","Data":"3eabeb552d0f62a8af039ea45b14b43d3a130d079025c852b75f25e47e641ecd"} Mar 12 15:03:11 crc kubenswrapper[4832]: I0312 15:03:11.599457 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qbvws" Mar 12 15:03:11 crc kubenswrapper[4832]: I0312 15:03:11.601271 4832 generic.go:334] "Generic (PLEG): container finished" podID="5315b3fe-3cfe-49e0-9965-43192f3b0f9c" containerID="8607b1477a438be41576484cd331c5510052e8643dbded529d8891db56ee3dc6" exitCode=0 Mar 12 15:03:11 crc kubenswrapper[4832]: I0312 15:03:11.601341 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q2scw" event={"ID":"5315b3fe-3cfe-49e0-9965-43192f3b0f9c","Type":"ContainerDied","Data":"8607b1477a438be41576484cd331c5510052e8643dbded529d8891db56ee3dc6"} Mar 12 15:03:11 crc kubenswrapper[4832]: I0312 15:03:11.628635 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qbvws" podStartSLOduration=3.3927849820000002 podStartE2EDuration="12.628609926s" podCreationTimestamp="2026-03-12 15:02:59 +0000 UTC" firstStartedPulling="2026-03-12 15:03:01.648376808 +0000 UTC m=+940.292391034" lastFinishedPulling="2026-03-12 15:03:10.884201752 +0000 UTC m=+949.528215978" observedRunningTime="2026-03-12 15:03:11.623551819 +0000 UTC m=+950.267566075" watchObservedRunningTime="2026-03-12 15:03:11.628609926 +0000 UTC m=+950.272624172" Mar 12 15:03:11 crc kubenswrapper[4832]: I0312 15:03:11.799665 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-6t98f" Mar 12 
15:03:12 crc kubenswrapper[4832]: I0312 15:03:12.624936 4832 generic.go:334] "Generic (PLEG): container finished" podID="5315b3fe-3cfe-49e0-9965-43192f3b0f9c" containerID="07fa60ad26070142912b12b2a91efb665215c93470caae95d79b0d886fc46634" exitCode=0 Mar 12 15:03:12 crc kubenswrapper[4832]: I0312 15:03:12.627454 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q2scw" event={"ID":"5315b3fe-3cfe-49e0-9965-43192f3b0f9c","Type":"ContainerDied","Data":"07fa60ad26070142912b12b2a91efb665215c93470caae95d79b0d886fc46634"} Mar 12 15:03:13 crc kubenswrapper[4832]: I0312 15:03:13.633288 4832 generic.go:334] "Generic (PLEG): container finished" podID="5315b3fe-3cfe-49e0-9965-43192f3b0f9c" containerID="74f8f68a3cae0f1982c8ca0401ad72882001288759bb72735f4edc3349ace7b7" exitCode=0 Mar 12 15:03:13 crc kubenswrapper[4832]: I0312 15:03:13.633329 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q2scw" event={"ID":"5315b3fe-3cfe-49e0-9965-43192f3b0f9c","Type":"ContainerDied","Data":"74f8f68a3cae0f1982c8ca0401ad72882001288759bb72735f4edc3349ace7b7"} Mar 12 15:03:13 crc kubenswrapper[4832]: I0312 15:03:13.729158 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fsw9m"] Mar 12 15:03:13 crc kubenswrapper[4832]: I0312 15:03:13.731313 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsw9m" Mar 12 15:03:13 crc kubenswrapper[4832]: I0312 15:03:13.743552 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsw9m"] Mar 12 15:03:13 crc kubenswrapper[4832]: I0312 15:03:13.871571 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhbnm\" (UniqueName: \"kubernetes.io/projected/6eb06dae-7ff2-4e55-9560-231cd2611332-kube-api-access-rhbnm\") pod \"certified-operators-fsw9m\" (UID: \"6eb06dae-7ff2-4e55-9560-231cd2611332\") " pod="openshift-marketplace/certified-operators-fsw9m" Mar 12 15:03:13 crc kubenswrapper[4832]: I0312 15:03:13.871659 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb06dae-7ff2-4e55-9560-231cd2611332-utilities\") pod \"certified-operators-fsw9m\" (UID: \"6eb06dae-7ff2-4e55-9560-231cd2611332\") " pod="openshift-marketplace/certified-operators-fsw9m" Mar 12 15:03:13 crc kubenswrapper[4832]: I0312 15:03:13.871874 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb06dae-7ff2-4e55-9560-231cd2611332-catalog-content\") pod \"certified-operators-fsw9m\" (UID: \"6eb06dae-7ff2-4e55-9560-231cd2611332\") " pod="openshift-marketplace/certified-operators-fsw9m" Mar 12 15:03:13 crc kubenswrapper[4832]: I0312 15:03:13.973425 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb06dae-7ff2-4e55-9560-231cd2611332-catalog-content\") pod \"certified-operators-fsw9m\" (UID: \"6eb06dae-7ff2-4e55-9560-231cd2611332\") " pod="openshift-marketplace/certified-operators-fsw9m" Mar 12 15:03:13 crc kubenswrapper[4832]: I0312 15:03:13.973553 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rhbnm\" (UniqueName: \"kubernetes.io/projected/6eb06dae-7ff2-4e55-9560-231cd2611332-kube-api-access-rhbnm\") pod \"certified-operators-fsw9m\" (UID: \"6eb06dae-7ff2-4e55-9560-231cd2611332\") " pod="openshift-marketplace/certified-operators-fsw9m" Mar 12 15:03:13 crc kubenswrapper[4832]: I0312 15:03:13.973612 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb06dae-7ff2-4e55-9560-231cd2611332-utilities\") pod \"certified-operators-fsw9m\" (UID: \"6eb06dae-7ff2-4e55-9560-231cd2611332\") " pod="openshift-marketplace/certified-operators-fsw9m" Mar 12 15:03:13 crc kubenswrapper[4832]: I0312 15:03:13.973802 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb06dae-7ff2-4e55-9560-231cd2611332-catalog-content\") pod \"certified-operators-fsw9m\" (UID: \"6eb06dae-7ff2-4e55-9560-231cd2611332\") " pod="openshift-marketplace/certified-operators-fsw9m" Mar 12 15:03:13 crc kubenswrapper[4832]: I0312 15:03:13.973968 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb06dae-7ff2-4e55-9560-231cd2611332-utilities\") pod \"certified-operators-fsw9m\" (UID: \"6eb06dae-7ff2-4e55-9560-231cd2611332\") " pod="openshift-marketplace/certified-operators-fsw9m" Mar 12 15:03:13 crc kubenswrapper[4832]: I0312 15:03:13.996397 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhbnm\" (UniqueName: \"kubernetes.io/projected/6eb06dae-7ff2-4e55-9560-231cd2611332-kube-api-access-rhbnm\") pod \"certified-operators-fsw9m\" (UID: \"6eb06dae-7ff2-4e55-9560-231cd2611332\") " pod="openshift-marketplace/certified-operators-fsw9m" Mar 12 15:03:14 crc kubenswrapper[4832]: I0312 15:03:14.094069 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsw9m" Mar 12 15:03:14 crc kubenswrapper[4832]: I0312 15:03:14.604062 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsw9m"] Mar 12 15:03:14 crc kubenswrapper[4832]: I0312 15:03:14.647074 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q2scw" event={"ID":"5315b3fe-3cfe-49e0-9965-43192f3b0f9c","Type":"ContainerStarted","Data":"ffd4dbf01b5be530cfd550c09875001531ae886aee664750dda5e5b1b6a306bd"} Mar 12 15:03:14 crc kubenswrapper[4832]: I0312 15:03:14.647118 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q2scw" event={"ID":"5315b3fe-3cfe-49e0-9965-43192f3b0f9c","Type":"ContainerStarted","Data":"f4859f386f18d286d632af8296c566117414b1054c4039e80109c086f999b7c3"} Mar 12 15:03:14 crc kubenswrapper[4832]: I0312 15:03:14.647127 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q2scw" event={"ID":"5315b3fe-3cfe-49e0-9965-43192f3b0f9c","Type":"ContainerStarted","Data":"755855ec12d1e71edd0a583c809d95dab13194a484d59fcd56b243ee764e6c37"} Mar 12 15:03:14 crc kubenswrapper[4832]: I0312 15:03:14.647135 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q2scw" event={"ID":"5315b3fe-3cfe-49e0-9965-43192f3b0f9c","Type":"ContainerStarted","Data":"11683daac3763423760b7dd2097a5c9f7945fbe77bbd20f53e3c649d6f7bc48c"} Mar 12 15:03:14 crc kubenswrapper[4832]: I0312 15:03:14.647144 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q2scw" event={"ID":"5315b3fe-3cfe-49e0-9965-43192f3b0f9c","Type":"ContainerStarted","Data":"01abeaa5423ea08b05c3cfccf19b42ad0a5ed531d56e96dd11a3943367612d6c"} Mar 12 15:03:14 crc kubenswrapper[4832]: I0312 15:03:14.652914 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsw9m" 
event={"ID":"6eb06dae-7ff2-4e55-9560-231cd2611332","Type":"ContainerStarted","Data":"0aa06ae60417b129af4f62e005181c5e61b048fa29295ad9b3b8d0fe05eb4048"} Mar 12 15:03:15 crc kubenswrapper[4832]: I0312 15:03:15.668220 4832 generic.go:334] "Generic (PLEG): container finished" podID="6eb06dae-7ff2-4e55-9560-231cd2611332" containerID="73da4c4fd80a8ce687b61da5d9629cdf7f5e457ee9b74c63218e77196b08dafa" exitCode=0 Mar 12 15:03:15 crc kubenswrapper[4832]: I0312 15:03:15.668329 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsw9m" event={"ID":"6eb06dae-7ff2-4e55-9560-231cd2611332","Type":"ContainerDied","Data":"73da4c4fd80a8ce687b61da5d9629cdf7f5e457ee9b74c63218e77196b08dafa"} Mar 12 15:03:15 crc kubenswrapper[4832]: I0312 15:03:15.682381 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q2scw" event={"ID":"5315b3fe-3cfe-49e0-9965-43192f3b0f9c","Type":"ContainerStarted","Data":"8b8293755b340d5216fc84b452a41860f8fae3acc47a972d2cdc28410c54814a"} Mar 12 15:03:15 crc kubenswrapper[4832]: I0312 15:03:15.683234 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-q2scw" Mar 12 15:03:15 crc kubenswrapper[4832]: I0312 15:03:15.712693 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-q2scw" podStartSLOduration=6.103432173 podStartE2EDuration="16.712676294s" podCreationTimestamp="2026-03-12 15:02:59 +0000 UTC" firstStartedPulling="2026-03-12 15:03:00.283665364 +0000 UTC m=+938.927679590" lastFinishedPulling="2026-03-12 15:03:10.892909485 +0000 UTC m=+949.536923711" observedRunningTime="2026-03-12 15:03:15.711692535 +0000 UTC m=+954.355706761" watchObservedRunningTime="2026-03-12 15:03:15.712676294 +0000 UTC m=+954.356690520" Mar 12 15:03:18 crc kubenswrapper[4832]: I0312 15:03:18.525351 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-lq2mx"] Mar 12 
15:03:18 crc kubenswrapper[4832]: I0312 15:03:18.526862 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lq2mx" Mar 12 15:03:18 crc kubenswrapper[4832]: I0312 15:03:18.528905 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-zl6qf" Mar 12 15:03:18 crc kubenswrapper[4832]: I0312 15:03:18.529284 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 12 15:03:18 crc kubenswrapper[4832]: I0312 15:03:18.535296 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 12 15:03:18 crc kubenswrapper[4832]: I0312 15:03:18.548223 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lq2mx"] Mar 12 15:03:18 crc kubenswrapper[4832]: I0312 15:03:18.642241 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnkvk\" (UniqueName: \"kubernetes.io/projected/470013d1-6bd6-4e73-beaf-98535ea56e43-kube-api-access-wnkvk\") pod \"openstack-operator-index-lq2mx\" (UID: \"470013d1-6bd6-4e73-beaf-98535ea56e43\") " pod="openstack-operators/openstack-operator-index-lq2mx" Mar 12 15:03:18 crc kubenswrapper[4832]: I0312 15:03:18.743816 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnkvk\" (UniqueName: \"kubernetes.io/projected/470013d1-6bd6-4e73-beaf-98535ea56e43-kube-api-access-wnkvk\") pod \"openstack-operator-index-lq2mx\" (UID: \"470013d1-6bd6-4e73-beaf-98535ea56e43\") " pod="openstack-operators/openstack-operator-index-lq2mx" Mar 12 15:03:18 crc kubenswrapper[4832]: I0312 15:03:18.770803 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnkvk\" (UniqueName: 
\"kubernetes.io/projected/470013d1-6bd6-4e73-beaf-98535ea56e43-kube-api-access-wnkvk\") pod \"openstack-operator-index-lq2mx\" (UID: \"470013d1-6bd6-4e73-beaf-98535ea56e43\") " pod="openstack-operators/openstack-operator-index-lq2mx" Mar 12 15:03:18 crc kubenswrapper[4832]: I0312 15:03:18.858198 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lq2mx" Mar 12 15:03:19 crc kubenswrapper[4832]: I0312 15:03:19.073823 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lq2mx"] Mar 12 15:03:19 crc kubenswrapper[4832]: W0312 15:03:19.081203 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod470013d1_6bd6_4e73_beaf_98535ea56e43.slice/crio-4d2eeeb8ceffdb314ad4485c260761d1510c0de406cc13f2fc38ff34ac66c8ff WatchSource:0}: Error finding container 4d2eeeb8ceffdb314ad4485c260761d1510c0de406cc13f2fc38ff34ac66c8ff: Status 404 returned error can't find the container with id 4d2eeeb8ceffdb314ad4485c260761d1510c0de406cc13f2fc38ff34ac66c8ff Mar 12 15:03:19 crc kubenswrapper[4832]: I0312 15:03:19.737815 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsw9m" event={"ID":"6eb06dae-7ff2-4e55-9560-231cd2611332","Type":"ContainerStarted","Data":"92ad7a33e73014e4e339434de383a660495f3530eeb5e089cce347a623f4c499"} Mar 12 15:03:19 crc kubenswrapper[4832]: I0312 15:03:19.740303 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lq2mx" event={"ID":"470013d1-6bd6-4e73-beaf-98535ea56e43","Type":"ContainerStarted","Data":"4d2eeeb8ceffdb314ad4485c260761d1510c0de406cc13f2fc38ff34ac66c8ff"} Mar 12 15:03:20 crc kubenswrapper[4832]: I0312 15:03:20.147518 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-q2scw" Mar 12 15:03:20 crc 
kubenswrapper[4832]: I0312 15:03:20.181558 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-q2scw" Mar 12 15:03:20 crc kubenswrapper[4832]: I0312 15:03:20.749196 4832 generic.go:334] "Generic (PLEG): container finished" podID="6eb06dae-7ff2-4e55-9560-231cd2611332" containerID="92ad7a33e73014e4e339434de383a660495f3530eeb5e089cce347a623f4c499" exitCode=0 Mar 12 15:03:20 crc kubenswrapper[4832]: I0312 15:03:20.749278 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsw9m" event={"ID":"6eb06dae-7ff2-4e55-9560-231cd2611332","Type":"ContainerDied","Data":"92ad7a33e73014e4e339434de383a660495f3530eeb5e089cce347a623f4c499"} Mar 12 15:03:21 crc kubenswrapper[4832]: I0312 15:03:21.761200 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lq2mx" event={"ID":"470013d1-6bd6-4e73-beaf-98535ea56e43","Type":"ContainerStarted","Data":"37892bbafae35bab8693ece4a704079bee7a06340095f9c906d9a7cd2db59386"} Mar 12 15:03:21 crc kubenswrapper[4832]: I0312 15:03:21.765047 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsw9m" event={"ID":"6eb06dae-7ff2-4e55-9560-231cd2611332","Type":"ContainerStarted","Data":"3308ca3e31f80d21a1de70873666e99b61b07bdc8c2aced1be2ee354fc230085"} Mar 12 15:03:21 crc kubenswrapper[4832]: I0312 15:03:21.792695 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-lq2mx" podStartSLOduration=1.511341738 podStartE2EDuration="3.792669132s" podCreationTimestamp="2026-03-12 15:03:18 +0000 UTC" firstStartedPulling="2026-03-12 15:03:19.090848149 +0000 UTC m=+957.734862375" lastFinishedPulling="2026-03-12 15:03:21.372175553 +0000 UTC m=+960.016189769" observedRunningTime="2026-03-12 15:03:21.783978258 +0000 UTC m=+960.427992504" watchObservedRunningTime="2026-03-12 15:03:21.792669132 +0000 UTC 
m=+960.436683368" Mar 12 15:03:21 crc kubenswrapper[4832]: I0312 15:03:21.816900 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fsw9m" podStartSLOduration=3.121255446 podStartE2EDuration="8.816875557s" podCreationTimestamp="2026-03-12 15:03:13 +0000 UTC" firstStartedPulling="2026-03-12 15:03:15.673225795 +0000 UTC m=+954.317240031" lastFinishedPulling="2026-03-12 15:03:21.368845916 +0000 UTC m=+960.012860142" observedRunningTime="2026-03-12 15:03:21.810694937 +0000 UTC m=+960.454709183" watchObservedRunningTime="2026-03-12 15:03:21.816875557 +0000 UTC m=+960.460889793" Mar 12 15:03:24 crc kubenswrapper[4832]: I0312 15:03:24.094114 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fsw9m" Mar 12 15:03:24 crc kubenswrapper[4832]: I0312 15:03:24.094571 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fsw9m" Mar 12 15:03:24 crc kubenswrapper[4832]: I0312 15:03:24.156009 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fsw9m" Mar 12 15:03:28 crc kubenswrapper[4832]: I0312 15:03:28.858652 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-lq2mx" Mar 12 15:03:28 crc kubenswrapper[4832]: I0312 15:03:28.859268 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-lq2mx" Mar 12 15:03:28 crc kubenswrapper[4832]: I0312 15:03:28.899912 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-lq2mx" Mar 12 15:03:29 crc kubenswrapper[4832]: I0312 15:03:29.855013 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-lq2mx" Mar 12 
15:03:30 crc kubenswrapper[4832]: I0312 15:03:30.150686 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-q2scw" Mar 12 15:03:30 crc kubenswrapper[4832]: I0312 15:03:30.753045 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r"] Mar 12 15:03:30 crc kubenswrapper[4832]: I0312 15:03:30.754425 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r" Mar 12 15:03:30 crc kubenswrapper[4832]: I0312 15:03:30.757944 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-trhxg" Mar 12 15:03:30 crc kubenswrapper[4832]: I0312 15:03:30.766582 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qbvws" Mar 12 15:03:30 crc kubenswrapper[4832]: I0312 15:03:30.774167 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r"] Mar 12 15:03:30 crc kubenswrapper[4832]: I0312 15:03:30.805392 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9842c418-b1db-4b37-b49a-8e7edbf04777-util\") pod \"dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r\" (UID: \"9842c418-b1db-4b37-b49a-8e7edbf04777\") " pod="openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r" Mar 12 15:03:30 crc kubenswrapper[4832]: I0312 15:03:30.805468 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb5p5\" (UniqueName: \"kubernetes.io/projected/9842c418-b1db-4b37-b49a-8e7edbf04777-kube-api-access-cb5p5\") pod 
\"dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r\" (UID: \"9842c418-b1db-4b37-b49a-8e7edbf04777\") " pod="openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r" Mar 12 15:03:30 crc kubenswrapper[4832]: I0312 15:03:30.805514 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9842c418-b1db-4b37-b49a-8e7edbf04777-bundle\") pod \"dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r\" (UID: \"9842c418-b1db-4b37-b49a-8e7edbf04777\") " pod="openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r" Mar 12 15:03:30 crc kubenswrapper[4832]: I0312 15:03:30.908258 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb5p5\" (UniqueName: \"kubernetes.io/projected/9842c418-b1db-4b37-b49a-8e7edbf04777-kube-api-access-cb5p5\") pod \"dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r\" (UID: \"9842c418-b1db-4b37-b49a-8e7edbf04777\") " pod="openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r" Mar 12 15:03:30 crc kubenswrapper[4832]: I0312 15:03:30.908458 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9842c418-b1db-4b37-b49a-8e7edbf04777-bundle\") pod \"dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r\" (UID: \"9842c418-b1db-4b37-b49a-8e7edbf04777\") " pod="openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r" Mar 12 15:03:30 crc kubenswrapper[4832]: I0312 15:03:30.909797 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9842c418-b1db-4b37-b49a-8e7edbf04777-util\") pod \"dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r\" (UID: \"9842c418-b1db-4b37-b49a-8e7edbf04777\") " 
pod="openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r" Mar 12 15:03:30 crc kubenswrapper[4832]: I0312 15:03:30.910598 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9842c418-b1db-4b37-b49a-8e7edbf04777-util\") pod \"dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r\" (UID: \"9842c418-b1db-4b37-b49a-8e7edbf04777\") " pod="openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r" Mar 12 15:03:30 crc kubenswrapper[4832]: I0312 15:03:30.910595 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9842c418-b1db-4b37-b49a-8e7edbf04777-bundle\") pod \"dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r\" (UID: \"9842c418-b1db-4b37-b49a-8e7edbf04777\") " pod="openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r" Mar 12 15:03:30 crc kubenswrapper[4832]: I0312 15:03:30.927977 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb5p5\" (UniqueName: \"kubernetes.io/projected/9842c418-b1db-4b37-b49a-8e7edbf04777-kube-api-access-cb5p5\") pod \"dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r\" (UID: \"9842c418-b1db-4b37-b49a-8e7edbf04777\") " pod="openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r" Mar 12 15:03:31 crc kubenswrapper[4832]: I0312 15:03:31.077183 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r" Mar 12 15:03:31 crc kubenswrapper[4832]: I0312 15:03:31.566221 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r"] Mar 12 15:03:31 crc kubenswrapper[4832]: W0312 15:03:31.583420 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9842c418_b1db_4b37_b49a_8e7edbf04777.slice/crio-b4a3b10e5032616bb64b3aa2ace8b8874fa57637a70be3bb18de3906fc1bca1e WatchSource:0}: Error finding container b4a3b10e5032616bb64b3aa2ace8b8874fa57637a70be3bb18de3906fc1bca1e: Status 404 returned error can't find the container with id b4a3b10e5032616bb64b3aa2ace8b8874fa57637a70be3bb18de3906fc1bca1e Mar 12 15:03:31 crc kubenswrapper[4832]: I0312 15:03:31.833282 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r" event={"ID":"9842c418-b1db-4b37-b49a-8e7edbf04777","Type":"ContainerStarted","Data":"1f02563506ae3c48a9cba132e8c43c4d118b179666947b83ad409cdf6c534b7b"} Mar 12 15:03:31 crc kubenswrapper[4832]: I0312 15:03:31.833348 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r" event={"ID":"9842c418-b1db-4b37-b49a-8e7edbf04777","Type":"ContainerStarted","Data":"b4a3b10e5032616bb64b3aa2ace8b8874fa57637a70be3bb18de3906fc1bca1e"} Mar 12 15:03:32 crc kubenswrapper[4832]: I0312 15:03:32.843137 4832 generic.go:334] "Generic (PLEG): container finished" podID="9842c418-b1db-4b37-b49a-8e7edbf04777" containerID="1f02563506ae3c48a9cba132e8c43c4d118b179666947b83ad409cdf6c534b7b" exitCode=0 Mar 12 15:03:32 crc kubenswrapper[4832]: I0312 15:03:32.843191 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r" event={"ID":"9842c418-b1db-4b37-b49a-8e7edbf04777","Type":"ContainerDied","Data":"1f02563506ae3c48a9cba132e8c43c4d118b179666947b83ad409cdf6c534b7b"} Mar 12 15:03:33 crc kubenswrapper[4832]: I0312 15:03:33.852401 4832 generic.go:334] "Generic (PLEG): container finished" podID="9842c418-b1db-4b37-b49a-8e7edbf04777" containerID="82fe5b6fcebe9627f14abc3c20c003e4027ff02953ac3388a1a871f2ea7ef7aa" exitCode=0 Mar 12 15:03:33 crc kubenswrapper[4832]: I0312 15:03:33.852516 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r" event={"ID":"9842c418-b1db-4b37-b49a-8e7edbf04777","Type":"ContainerDied","Data":"82fe5b6fcebe9627f14abc3c20c003e4027ff02953ac3388a1a871f2ea7ef7aa"} Mar 12 15:03:33 crc kubenswrapper[4832]: I0312 15:03:33.915603 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xvp95"] Mar 12 15:03:33 crc kubenswrapper[4832]: I0312 15:03:33.930474 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xvp95" Mar 12 15:03:33 crc kubenswrapper[4832]: I0312 15:03:33.938641 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xvp95"] Mar 12 15:03:33 crc kubenswrapper[4832]: I0312 15:03:33.953174 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrzg4\" (UniqueName: \"kubernetes.io/projected/588fb950-c095-436d-a0e7-1192048f4e45-kube-api-access-nrzg4\") pod \"community-operators-xvp95\" (UID: \"588fb950-c095-436d-a0e7-1192048f4e45\") " pod="openshift-marketplace/community-operators-xvp95" Mar 12 15:03:33 crc kubenswrapper[4832]: I0312 15:03:33.953216 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/588fb950-c095-436d-a0e7-1192048f4e45-utilities\") pod \"community-operators-xvp95\" (UID: \"588fb950-c095-436d-a0e7-1192048f4e45\") " pod="openshift-marketplace/community-operators-xvp95" Mar 12 15:03:33 crc kubenswrapper[4832]: I0312 15:03:33.953240 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/588fb950-c095-436d-a0e7-1192048f4e45-catalog-content\") pod \"community-operators-xvp95\" (UID: \"588fb950-c095-436d-a0e7-1192048f4e45\") " pod="openshift-marketplace/community-operators-xvp95" Mar 12 15:03:34 crc kubenswrapper[4832]: I0312 15:03:34.054261 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzg4\" (UniqueName: \"kubernetes.io/projected/588fb950-c095-436d-a0e7-1192048f4e45-kube-api-access-nrzg4\") pod \"community-operators-xvp95\" (UID: \"588fb950-c095-436d-a0e7-1192048f4e45\") " pod="openshift-marketplace/community-operators-xvp95" Mar 12 15:03:34 crc kubenswrapper[4832]: I0312 15:03:34.054322 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/588fb950-c095-436d-a0e7-1192048f4e45-utilities\") pod \"community-operators-xvp95\" (UID: \"588fb950-c095-436d-a0e7-1192048f4e45\") " pod="openshift-marketplace/community-operators-xvp95" Mar 12 15:03:34 crc kubenswrapper[4832]: I0312 15:03:34.054350 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/588fb950-c095-436d-a0e7-1192048f4e45-catalog-content\") pod \"community-operators-xvp95\" (UID: \"588fb950-c095-436d-a0e7-1192048f4e45\") " pod="openshift-marketplace/community-operators-xvp95" Mar 12 15:03:34 crc kubenswrapper[4832]: I0312 15:03:34.054843 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/588fb950-c095-436d-a0e7-1192048f4e45-utilities\") pod \"community-operators-xvp95\" (UID: \"588fb950-c095-436d-a0e7-1192048f4e45\") " pod="openshift-marketplace/community-operators-xvp95" Mar 12 15:03:34 crc kubenswrapper[4832]: I0312 15:03:34.054884 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/588fb950-c095-436d-a0e7-1192048f4e45-catalog-content\") pod \"community-operators-xvp95\" (UID: \"588fb950-c095-436d-a0e7-1192048f4e45\") " pod="openshift-marketplace/community-operators-xvp95" Mar 12 15:03:34 crc kubenswrapper[4832]: I0312 15:03:34.075316 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrzg4\" (UniqueName: \"kubernetes.io/projected/588fb950-c095-436d-a0e7-1192048f4e45-kube-api-access-nrzg4\") pod \"community-operators-xvp95\" (UID: \"588fb950-c095-436d-a0e7-1192048f4e45\") " pod="openshift-marketplace/community-operators-xvp95" Mar 12 15:03:34 crc kubenswrapper[4832]: I0312 15:03:34.143601 4832 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fsw9m" Mar 12 15:03:34 crc kubenswrapper[4832]: I0312 15:03:34.247445 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xvp95" Mar 12 15:03:34 crc kubenswrapper[4832]: I0312 15:03:34.716638 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xvp95"] Mar 12 15:03:34 crc kubenswrapper[4832]: I0312 15:03:34.863408 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvp95" event={"ID":"588fb950-c095-436d-a0e7-1192048f4e45","Type":"ContainerStarted","Data":"08415d7359f303ce25a32b8b3d177778c6be2697d0c605dd58c32242fdbba856"} Mar 12 15:03:34 crc kubenswrapper[4832]: I0312 15:03:34.866144 4832 generic.go:334] "Generic (PLEG): container finished" podID="9842c418-b1db-4b37-b49a-8e7edbf04777" containerID="b8f64be966e17266fd97ebaaadebcb86e7f5cf8ccb9412eed86dafb7f21b223f" exitCode=0 Mar 12 15:03:34 crc kubenswrapper[4832]: I0312 15:03:34.866188 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r" event={"ID":"9842c418-b1db-4b37-b49a-8e7edbf04777","Type":"ContainerDied","Data":"b8f64be966e17266fd97ebaaadebcb86e7f5cf8ccb9412eed86dafb7f21b223f"} Mar 12 15:03:35 crc kubenswrapper[4832]: I0312 15:03:35.873034 4832 generic.go:334] "Generic (PLEG): container finished" podID="588fb950-c095-436d-a0e7-1192048f4e45" containerID="ce922dce0eaae1d619ec5044c1f152fbb823764206181e7c65372589af2b45b2" exitCode=0 Mar 12 15:03:35 crc kubenswrapper[4832]: I0312 15:03:35.873117 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvp95" event={"ID":"588fb950-c095-436d-a0e7-1192048f4e45","Type":"ContainerDied","Data":"ce922dce0eaae1d619ec5044c1f152fbb823764206181e7c65372589af2b45b2"} Mar 12 15:03:36 crc 
kubenswrapper[4832]: I0312 15:03:36.184959 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r" Mar 12 15:03:36 crc kubenswrapper[4832]: I0312 15:03:36.283347 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9842c418-b1db-4b37-b49a-8e7edbf04777-bundle\") pod \"9842c418-b1db-4b37-b49a-8e7edbf04777\" (UID: \"9842c418-b1db-4b37-b49a-8e7edbf04777\") " Mar 12 15:03:36 crc kubenswrapper[4832]: I0312 15:03:36.283404 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb5p5\" (UniqueName: \"kubernetes.io/projected/9842c418-b1db-4b37-b49a-8e7edbf04777-kube-api-access-cb5p5\") pod \"9842c418-b1db-4b37-b49a-8e7edbf04777\" (UID: \"9842c418-b1db-4b37-b49a-8e7edbf04777\") " Mar 12 15:03:36 crc kubenswrapper[4832]: I0312 15:03:36.283495 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9842c418-b1db-4b37-b49a-8e7edbf04777-util\") pod \"9842c418-b1db-4b37-b49a-8e7edbf04777\" (UID: \"9842c418-b1db-4b37-b49a-8e7edbf04777\") " Mar 12 15:03:36 crc kubenswrapper[4832]: I0312 15:03:36.284800 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9842c418-b1db-4b37-b49a-8e7edbf04777-bundle" (OuterVolumeSpecName: "bundle") pod "9842c418-b1db-4b37-b49a-8e7edbf04777" (UID: "9842c418-b1db-4b37-b49a-8e7edbf04777"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:03:36 crc kubenswrapper[4832]: I0312 15:03:36.292943 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9842c418-b1db-4b37-b49a-8e7edbf04777-kube-api-access-cb5p5" (OuterVolumeSpecName: "kube-api-access-cb5p5") pod "9842c418-b1db-4b37-b49a-8e7edbf04777" (UID: "9842c418-b1db-4b37-b49a-8e7edbf04777"). InnerVolumeSpecName "kube-api-access-cb5p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:03:36 crc kubenswrapper[4832]: I0312 15:03:36.304428 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9842c418-b1db-4b37-b49a-8e7edbf04777-util" (OuterVolumeSpecName: "util") pod "9842c418-b1db-4b37-b49a-8e7edbf04777" (UID: "9842c418-b1db-4b37-b49a-8e7edbf04777"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:03:36 crc kubenswrapper[4832]: I0312 15:03:36.384788 4832 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9842c418-b1db-4b37-b49a-8e7edbf04777-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:36 crc kubenswrapper[4832]: I0312 15:03:36.384853 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb5p5\" (UniqueName: \"kubernetes.io/projected/9842c418-b1db-4b37-b49a-8e7edbf04777-kube-api-access-cb5p5\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:36 crc kubenswrapper[4832]: I0312 15:03:36.384882 4832 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9842c418-b1db-4b37-b49a-8e7edbf04777-util\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:36 crc kubenswrapper[4832]: I0312 15:03:36.881793 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r" Mar 12 15:03:36 crc kubenswrapper[4832]: I0312 15:03:36.881678 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r" event={"ID":"9842c418-b1db-4b37-b49a-8e7edbf04777","Type":"ContainerDied","Data":"b4a3b10e5032616bb64b3aa2ace8b8874fa57637a70be3bb18de3906fc1bca1e"} Mar 12 15:03:36 crc kubenswrapper[4832]: I0312 15:03:36.882819 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4a3b10e5032616bb64b3aa2ace8b8874fa57637a70be3bb18de3906fc1bca1e" Mar 12 15:03:37 crc kubenswrapper[4832]: I0312 15:03:37.891371 4832 generic.go:334] "Generic (PLEG): container finished" podID="588fb950-c095-436d-a0e7-1192048f4e45" containerID="e77d7eaddeeb298fb775caff8e1db4c8afbb76abfefcd04c92cba18a9cc2ac67" exitCode=0 Mar 12 15:03:37 crc kubenswrapper[4832]: I0312 15:03:37.891484 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvp95" event={"ID":"588fb950-c095-436d-a0e7-1192048f4e45","Type":"ContainerDied","Data":"e77d7eaddeeb298fb775caff8e1db4c8afbb76abfefcd04c92cba18a9cc2ac67"} Mar 12 15:03:38 crc kubenswrapper[4832]: I0312 15:03:38.706590 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsw9m"] Mar 12 15:03:38 crc kubenswrapper[4832]: I0312 15:03:38.706930 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fsw9m" podUID="6eb06dae-7ff2-4e55-9560-231cd2611332" containerName="registry-server" containerID="cri-o://3308ca3e31f80d21a1de70873666e99b61b07bdc8c2aced1be2ee354fc230085" gracePeriod=2 Mar 12 15:03:38 crc kubenswrapper[4832]: I0312 15:03:38.900300 4832 generic.go:334] "Generic (PLEG): container finished" podID="6eb06dae-7ff2-4e55-9560-231cd2611332" 
containerID="3308ca3e31f80d21a1de70873666e99b61b07bdc8c2aced1be2ee354fc230085" exitCode=0 Mar 12 15:03:38 crc kubenswrapper[4832]: I0312 15:03:38.900490 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsw9m" event={"ID":"6eb06dae-7ff2-4e55-9560-231cd2611332","Type":"ContainerDied","Data":"3308ca3e31f80d21a1de70873666e99b61b07bdc8c2aced1be2ee354fc230085"} Mar 12 15:03:39 crc kubenswrapper[4832]: I0312 15:03:39.061028 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsw9m" Mar 12 15:03:39 crc kubenswrapper[4832]: I0312 15:03:39.221100 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb06dae-7ff2-4e55-9560-231cd2611332-catalog-content\") pod \"6eb06dae-7ff2-4e55-9560-231cd2611332\" (UID: \"6eb06dae-7ff2-4e55-9560-231cd2611332\") " Mar 12 15:03:39 crc kubenswrapper[4832]: I0312 15:03:39.221222 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhbnm\" (UniqueName: \"kubernetes.io/projected/6eb06dae-7ff2-4e55-9560-231cd2611332-kube-api-access-rhbnm\") pod \"6eb06dae-7ff2-4e55-9560-231cd2611332\" (UID: \"6eb06dae-7ff2-4e55-9560-231cd2611332\") " Mar 12 15:03:39 crc kubenswrapper[4832]: I0312 15:03:39.221308 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb06dae-7ff2-4e55-9560-231cd2611332-utilities\") pod \"6eb06dae-7ff2-4e55-9560-231cd2611332\" (UID: \"6eb06dae-7ff2-4e55-9560-231cd2611332\") " Mar 12 15:03:39 crc kubenswrapper[4832]: I0312 15:03:39.222489 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb06dae-7ff2-4e55-9560-231cd2611332-utilities" (OuterVolumeSpecName: "utilities") pod "6eb06dae-7ff2-4e55-9560-231cd2611332" (UID: 
"6eb06dae-7ff2-4e55-9560-231cd2611332"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:03:39 crc kubenswrapper[4832]: I0312 15:03:39.226891 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb06dae-7ff2-4e55-9560-231cd2611332-kube-api-access-rhbnm" (OuterVolumeSpecName: "kube-api-access-rhbnm") pod "6eb06dae-7ff2-4e55-9560-231cd2611332" (UID: "6eb06dae-7ff2-4e55-9560-231cd2611332"). InnerVolumeSpecName "kube-api-access-rhbnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:03:39 crc kubenswrapper[4832]: I0312 15:03:39.284531 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb06dae-7ff2-4e55-9560-231cd2611332-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6eb06dae-7ff2-4e55-9560-231cd2611332" (UID: "6eb06dae-7ff2-4e55-9560-231cd2611332"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:03:39 crc kubenswrapper[4832]: I0312 15:03:39.322583 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb06dae-7ff2-4e55-9560-231cd2611332-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:39 crc kubenswrapper[4832]: I0312 15:03:39.322616 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb06dae-7ff2-4e55-9560-231cd2611332-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:39 crc kubenswrapper[4832]: I0312 15:03:39.322628 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhbnm\" (UniqueName: \"kubernetes.io/projected/6eb06dae-7ff2-4e55-9560-231cd2611332-kube-api-access-rhbnm\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:39 crc kubenswrapper[4832]: I0312 15:03:39.909605 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsw9m" Mar 12 15:03:39 crc kubenswrapper[4832]: I0312 15:03:39.909630 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsw9m" event={"ID":"6eb06dae-7ff2-4e55-9560-231cd2611332","Type":"ContainerDied","Data":"0aa06ae60417b129af4f62e005181c5e61b048fa29295ad9b3b8d0fe05eb4048"} Mar 12 15:03:39 crc kubenswrapper[4832]: I0312 15:03:39.910077 4832 scope.go:117] "RemoveContainer" containerID="3308ca3e31f80d21a1de70873666e99b61b07bdc8c2aced1be2ee354fc230085" Mar 12 15:03:39 crc kubenswrapper[4832]: I0312 15:03:39.921434 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvp95" event={"ID":"588fb950-c095-436d-a0e7-1192048f4e45","Type":"ContainerStarted","Data":"c000672b32b223808d1baedb16595e354783d1dad1aae142aaf4438fae55fa92"} Mar 12 15:03:39 crc kubenswrapper[4832]: I0312 15:03:39.938263 4832 scope.go:117] "RemoveContainer" containerID="92ad7a33e73014e4e339434de383a660495f3530eeb5e089cce347a623f4c499" Mar 12 15:03:39 crc kubenswrapper[4832]: I0312 15:03:39.942006 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xvp95" podStartSLOduration=3.534279487 podStartE2EDuration="6.94198968s" podCreationTimestamp="2026-03-12 15:03:33 +0000 UTC" firstStartedPulling="2026-03-12 15:03:35.87446445 +0000 UTC m=+974.518478676" lastFinishedPulling="2026-03-12 15:03:39.282174643 +0000 UTC m=+977.926188869" observedRunningTime="2026-03-12 15:03:39.939372704 +0000 UTC m=+978.583386930" watchObservedRunningTime="2026-03-12 15:03:39.94198968 +0000 UTC m=+978.586003906" Mar 12 15:03:39 crc kubenswrapper[4832]: I0312 15:03:39.956384 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsw9m"] Mar 12 15:03:39 crc kubenswrapper[4832]: I0312 15:03:39.965994 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-fsw9m"] Mar 12 15:03:39 crc kubenswrapper[4832]: I0312 15:03:39.974795 4832 scope.go:117] "RemoveContainer" containerID="73da4c4fd80a8ce687b61da5d9629cdf7f5e457ee9b74c63218e77196b08dafa" Mar 12 15:03:40 crc kubenswrapper[4832]: I0312 15:03:40.629233 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eb06dae-7ff2-4e55-9560-231cd2611332" path="/var/lib/kubelet/pods/6eb06dae-7ff2-4e55-9560-231cd2611332/volumes" Mar 12 15:03:41 crc kubenswrapper[4832]: I0312 15:03:41.994897 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-666b5bf768-rllmq"] Mar 12 15:03:41 crc kubenswrapper[4832]: E0312 15:03:41.995508 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb06dae-7ff2-4e55-9560-231cd2611332" containerName="extract-content" Mar 12 15:03:41 crc kubenswrapper[4832]: I0312 15:03:41.995575 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb06dae-7ff2-4e55-9560-231cd2611332" containerName="extract-content" Mar 12 15:03:41 crc kubenswrapper[4832]: E0312 15:03:41.995593 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb06dae-7ff2-4e55-9560-231cd2611332" containerName="extract-utilities" Mar 12 15:03:41 crc kubenswrapper[4832]: I0312 15:03:41.995604 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb06dae-7ff2-4e55-9560-231cd2611332" containerName="extract-utilities" Mar 12 15:03:41 crc kubenswrapper[4832]: E0312 15:03:41.995615 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9842c418-b1db-4b37-b49a-8e7edbf04777" containerName="util" Mar 12 15:03:41 crc kubenswrapper[4832]: I0312 15:03:41.995622 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9842c418-b1db-4b37-b49a-8e7edbf04777" containerName="util" Mar 12 15:03:41 crc kubenswrapper[4832]: E0312 15:03:41.995635 4832 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9842c418-b1db-4b37-b49a-8e7edbf04777" containerName="extract" Mar 12 15:03:41 crc kubenswrapper[4832]: I0312 15:03:41.995642 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9842c418-b1db-4b37-b49a-8e7edbf04777" containerName="extract" Mar 12 15:03:41 crc kubenswrapper[4832]: E0312 15:03:41.995652 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb06dae-7ff2-4e55-9560-231cd2611332" containerName="registry-server" Mar 12 15:03:41 crc kubenswrapper[4832]: I0312 15:03:41.995661 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb06dae-7ff2-4e55-9560-231cd2611332" containerName="registry-server" Mar 12 15:03:41 crc kubenswrapper[4832]: E0312 15:03:41.995671 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9842c418-b1db-4b37-b49a-8e7edbf04777" containerName="pull" Mar 12 15:03:41 crc kubenswrapper[4832]: I0312 15:03:41.995678 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9842c418-b1db-4b37-b49a-8e7edbf04777" containerName="pull" Mar 12 15:03:41 crc kubenswrapper[4832]: I0312 15:03:41.995798 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9842c418-b1db-4b37-b49a-8e7edbf04777" containerName="extract" Mar 12 15:03:41 crc kubenswrapper[4832]: I0312 15:03:41.995812 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb06dae-7ff2-4e55-9560-231cd2611332" containerName="registry-server" Mar 12 15:03:41 crc kubenswrapper[4832]: I0312 15:03:41.996300 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-666b5bf768-rllmq" Mar 12 15:03:42 crc kubenswrapper[4832]: I0312 15:03:42.003507 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-65f7l" Mar 12 15:03:42 crc kubenswrapper[4832]: I0312 15:03:42.020448 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-666b5bf768-rllmq"] Mar 12 15:03:42 crc kubenswrapper[4832]: I0312 15:03:42.162000 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt4rw\" (UniqueName: \"kubernetes.io/projected/cdebf23f-0836-48d1-9edf-c72140fa5f77-kube-api-access-lt4rw\") pod \"openstack-operator-controller-init-666b5bf768-rllmq\" (UID: \"cdebf23f-0836-48d1-9edf-c72140fa5f77\") " pod="openstack-operators/openstack-operator-controller-init-666b5bf768-rllmq" Mar 12 15:03:42 crc kubenswrapper[4832]: I0312 15:03:42.263104 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt4rw\" (UniqueName: \"kubernetes.io/projected/cdebf23f-0836-48d1-9edf-c72140fa5f77-kube-api-access-lt4rw\") pod \"openstack-operator-controller-init-666b5bf768-rllmq\" (UID: \"cdebf23f-0836-48d1-9edf-c72140fa5f77\") " pod="openstack-operators/openstack-operator-controller-init-666b5bf768-rllmq" Mar 12 15:03:42 crc kubenswrapper[4832]: I0312 15:03:42.282824 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt4rw\" (UniqueName: \"kubernetes.io/projected/cdebf23f-0836-48d1-9edf-c72140fa5f77-kube-api-access-lt4rw\") pod \"openstack-operator-controller-init-666b5bf768-rllmq\" (UID: \"cdebf23f-0836-48d1-9edf-c72140fa5f77\") " pod="openstack-operators/openstack-operator-controller-init-666b5bf768-rllmq" Mar 12 15:03:42 crc kubenswrapper[4832]: I0312 15:03:42.313691 4832 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-666b5bf768-rllmq" Mar 12 15:03:42 crc kubenswrapper[4832]: I0312 15:03:42.767339 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-666b5bf768-rllmq"] Mar 12 15:03:42 crc kubenswrapper[4832]: W0312 15:03:42.787192 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdebf23f_0836_48d1_9edf_c72140fa5f77.slice/crio-b6a2e0206f3ab0e7dd36ef804215edcc48ad686d014ecae8b95fd8d900ef865c WatchSource:0}: Error finding container b6a2e0206f3ab0e7dd36ef804215edcc48ad686d014ecae8b95fd8d900ef865c: Status 404 returned error can't find the container with id b6a2e0206f3ab0e7dd36ef804215edcc48ad686d014ecae8b95fd8d900ef865c Mar 12 15:03:42 crc kubenswrapper[4832]: I0312 15:03:42.947859 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-666b5bf768-rllmq" event={"ID":"cdebf23f-0836-48d1-9edf-c72140fa5f77","Type":"ContainerStarted","Data":"b6a2e0206f3ab0e7dd36ef804215edcc48ad686d014ecae8b95fd8d900ef865c"} Mar 12 15:03:44 crc kubenswrapper[4832]: I0312 15:03:44.249199 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xvp95" Mar 12 15:03:44 crc kubenswrapper[4832]: I0312 15:03:44.249559 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xvp95" Mar 12 15:03:44 crc kubenswrapper[4832]: I0312 15:03:44.297226 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xvp95" Mar 12 15:03:45 crc kubenswrapper[4832]: I0312 15:03:45.004416 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xvp95" Mar 12 15:03:46 crc kubenswrapper[4832]: 
I0312 15:03:46.978125 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-666b5bf768-rllmq" event={"ID":"cdebf23f-0836-48d1-9edf-c72140fa5f77","Type":"ContainerStarted","Data":"bf61de10445d905165a416fb47bc97edd973544bcb4c939dc3230302abc1ac75"} Mar 12 15:03:46 crc kubenswrapper[4832]: I0312 15:03:46.978495 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-666b5bf768-rllmq" Mar 12 15:03:47 crc kubenswrapper[4832]: I0312 15:03:47.011080 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-666b5bf768-rllmq" podStartSLOduration=2.076938716 podStartE2EDuration="6.011068848s" podCreationTimestamp="2026-03-12 15:03:41 +0000 UTC" firstStartedPulling="2026-03-12 15:03:42.79043654 +0000 UTC m=+981.434450766" lastFinishedPulling="2026-03-12 15:03:46.724566672 +0000 UTC m=+985.368580898" observedRunningTime="2026-03-12 15:03:47.00804464 +0000 UTC m=+985.652058876" watchObservedRunningTime="2026-03-12 15:03:47.011068848 +0000 UTC m=+985.655083074" Mar 12 15:03:48 crc kubenswrapper[4832]: I0312 15:03:48.105399 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xvp95"] Mar 12 15:03:48 crc kubenswrapper[4832]: I0312 15:03:48.105669 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xvp95" podUID="588fb950-c095-436d-a0e7-1192048f4e45" containerName="registry-server" containerID="cri-o://c000672b32b223808d1baedb16595e354783d1dad1aae142aaf4438fae55fa92" gracePeriod=2 Mar 12 15:03:48 crc kubenswrapper[4832]: I0312 15:03:48.588195 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xvp95" Mar 12 15:03:48 crc kubenswrapper[4832]: I0312 15:03:48.764788 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrzg4\" (UniqueName: \"kubernetes.io/projected/588fb950-c095-436d-a0e7-1192048f4e45-kube-api-access-nrzg4\") pod \"588fb950-c095-436d-a0e7-1192048f4e45\" (UID: \"588fb950-c095-436d-a0e7-1192048f4e45\") " Mar 12 15:03:48 crc kubenswrapper[4832]: I0312 15:03:48.764877 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/588fb950-c095-436d-a0e7-1192048f4e45-utilities\") pod \"588fb950-c095-436d-a0e7-1192048f4e45\" (UID: \"588fb950-c095-436d-a0e7-1192048f4e45\") " Mar 12 15:03:48 crc kubenswrapper[4832]: I0312 15:03:48.764901 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/588fb950-c095-436d-a0e7-1192048f4e45-catalog-content\") pod \"588fb950-c095-436d-a0e7-1192048f4e45\" (UID: \"588fb950-c095-436d-a0e7-1192048f4e45\") " Mar 12 15:03:48 crc kubenswrapper[4832]: I0312 15:03:48.766205 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/588fb950-c095-436d-a0e7-1192048f4e45-utilities" (OuterVolumeSpecName: "utilities") pod "588fb950-c095-436d-a0e7-1192048f4e45" (UID: "588fb950-c095-436d-a0e7-1192048f4e45"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:03:48 crc kubenswrapper[4832]: I0312 15:03:48.773757 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/588fb950-c095-436d-a0e7-1192048f4e45-kube-api-access-nrzg4" (OuterVolumeSpecName: "kube-api-access-nrzg4") pod "588fb950-c095-436d-a0e7-1192048f4e45" (UID: "588fb950-c095-436d-a0e7-1192048f4e45"). InnerVolumeSpecName "kube-api-access-nrzg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:03:48 crc kubenswrapper[4832]: I0312 15:03:48.836657 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/588fb950-c095-436d-a0e7-1192048f4e45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "588fb950-c095-436d-a0e7-1192048f4e45" (UID: "588fb950-c095-436d-a0e7-1192048f4e45"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:03:48 crc kubenswrapper[4832]: I0312 15:03:48.866415 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/588fb950-c095-436d-a0e7-1192048f4e45-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:48 crc kubenswrapper[4832]: I0312 15:03:48.866462 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/588fb950-c095-436d-a0e7-1192048f4e45-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:48 crc kubenswrapper[4832]: I0312 15:03:48.866476 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrzg4\" (UniqueName: \"kubernetes.io/projected/588fb950-c095-436d-a0e7-1192048f4e45-kube-api-access-nrzg4\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:48 crc kubenswrapper[4832]: I0312 15:03:48.993391 4832 generic.go:334] "Generic (PLEG): container finished" podID="588fb950-c095-436d-a0e7-1192048f4e45" containerID="c000672b32b223808d1baedb16595e354783d1dad1aae142aaf4438fae55fa92" exitCode=0 Mar 12 15:03:48 crc kubenswrapper[4832]: I0312 15:03:48.993477 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xvp95" Mar 12 15:03:48 crc kubenswrapper[4832]: I0312 15:03:48.993500 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvp95" event={"ID":"588fb950-c095-436d-a0e7-1192048f4e45","Type":"ContainerDied","Data":"c000672b32b223808d1baedb16595e354783d1dad1aae142aaf4438fae55fa92"} Mar 12 15:03:48 crc kubenswrapper[4832]: I0312 15:03:48.993574 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvp95" event={"ID":"588fb950-c095-436d-a0e7-1192048f4e45","Type":"ContainerDied","Data":"08415d7359f303ce25a32b8b3d177778c6be2697d0c605dd58c32242fdbba856"} Mar 12 15:03:48 crc kubenswrapper[4832]: I0312 15:03:48.993591 4832 scope.go:117] "RemoveContainer" containerID="c000672b32b223808d1baedb16595e354783d1dad1aae142aaf4438fae55fa92" Mar 12 15:03:49 crc kubenswrapper[4832]: I0312 15:03:49.012597 4832 scope.go:117] "RemoveContainer" containerID="e77d7eaddeeb298fb775caff8e1db4c8afbb76abfefcd04c92cba18a9cc2ac67" Mar 12 15:03:49 crc kubenswrapper[4832]: I0312 15:03:49.036458 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xvp95"] Mar 12 15:03:49 crc kubenswrapper[4832]: I0312 15:03:49.037418 4832 scope.go:117] "RemoveContainer" containerID="ce922dce0eaae1d619ec5044c1f152fbb823764206181e7c65372589af2b45b2" Mar 12 15:03:49 crc kubenswrapper[4832]: I0312 15:03:49.040820 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xvp95"] Mar 12 15:03:49 crc kubenswrapper[4832]: I0312 15:03:49.068278 4832 scope.go:117] "RemoveContainer" containerID="c000672b32b223808d1baedb16595e354783d1dad1aae142aaf4438fae55fa92" Mar 12 15:03:49 crc kubenswrapper[4832]: E0312 15:03:49.068976 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c000672b32b223808d1baedb16595e354783d1dad1aae142aaf4438fae55fa92\": container with ID starting with c000672b32b223808d1baedb16595e354783d1dad1aae142aaf4438fae55fa92 not found: ID does not exist" containerID="c000672b32b223808d1baedb16595e354783d1dad1aae142aaf4438fae55fa92" Mar 12 15:03:49 crc kubenswrapper[4832]: I0312 15:03:49.069080 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c000672b32b223808d1baedb16595e354783d1dad1aae142aaf4438fae55fa92"} err="failed to get container status \"c000672b32b223808d1baedb16595e354783d1dad1aae142aaf4438fae55fa92\": rpc error: code = NotFound desc = could not find container \"c000672b32b223808d1baedb16595e354783d1dad1aae142aaf4438fae55fa92\": container with ID starting with c000672b32b223808d1baedb16595e354783d1dad1aae142aaf4438fae55fa92 not found: ID does not exist" Mar 12 15:03:49 crc kubenswrapper[4832]: I0312 15:03:49.069175 4832 scope.go:117] "RemoveContainer" containerID="e77d7eaddeeb298fb775caff8e1db4c8afbb76abfefcd04c92cba18a9cc2ac67" Mar 12 15:03:49 crc kubenswrapper[4832]: E0312 15:03:49.069466 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e77d7eaddeeb298fb775caff8e1db4c8afbb76abfefcd04c92cba18a9cc2ac67\": container with ID starting with e77d7eaddeeb298fb775caff8e1db4c8afbb76abfefcd04c92cba18a9cc2ac67 not found: ID does not exist" containerID="e77d7eaddeeb298fb775caff8e1db4c8afbb76abfefcd04c92cba18a9cc2ac67" Mar 12 15:03:49 crc kubenswrapper[4832]: I0312 15:03:49.069488 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e77d7eaddeeb298fb775caff8e1db4c8afbb76abfefcd04c92cba18a9cc2ac67"} err="failed to get container status \"e77d7eaddeeb298fb775caff8e1db4c8afbb76abfefcd04c92cba18a9cc2ac67\": rpc error: code = NotFound desc = could not find container \"e77d7eaddeeb298fb775caff8e1db4c8afbb76abfefcd04c92cba18a9cc2ac67\": container with ID 
starting with e77d7eaddeeb298fb775caff8e1db4c8afbb76abfefcd04c92cba18a9cc2ac67 not found: ID does not exist" Mar 12 15:03:49 crc kubenswrapper[4832]: I0312 15:03:49.069522 4832 scope.go:117] "RemoveContainer" containerID="ce922dce0eaae1d619ec5044c1f152fbb823764206181e7c65372589af2b45b2" Mar 12 15:03:49 crc kubenswrapper[4832]: E0312 15:03:49.069803 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce922dce0eaae1d619ec5044c1f152fbb823764206181e7c65372589af2b45b2\": container with ID starting with ce922dce0eaae1d619ec5044c1f152fbb823764206181e7c65372589af2b45b2 not found: ID does not exist" containerID="ce922dce0eaae1d619ec5044c1f152fbb823764206181e7c65372589af2b45b2" Mar 12 15:03:49 crc kubenswrapper[4832]: I0312 15:03:49.069824 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce922dce0eaae1d619ec5044c1f152fbb823764206181e7c65372589af2b45b2"} err="failed to get container status \"ce922dce0eaae1d619ec5044c1f152fbb823764206181e7c65372589af2b45b2\": rpc error: code = NotFound desc = could not find container \"ce922dce0eaae1d619ec5044c1f152fbb823764206181e7c65372589af2b45b2\": container with ID starting with ce922dce0eaae1d619ec5044c1f152fbb823764206181e7c65372589af2b45b2 not found: ID does not exist" Mar 12 15:03:50 crc kubenswrapper[4832]: I0312 15:03:50.632735 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="588fb950-c095-436d-a0e7-1192048f4e45" path="/var/lib/kubelet/pods/588fb950-c095-436d-a0e7-1192048f4e45/volumes" Mar 12 15:03:52 crc kubenswrapper[4832]: I0312 15:03:52.316729 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-666b5bf768-rllmq" Mar 12 15:03:56 crc kubenswrapper[4832]: I0312 15:03:56.314536 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:03:56 crc kubenswrapper[4832]: I0312 15:03:56.314903 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:04:00 crc kubenswrapper[4832]: I0312 15:04:00.184248 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555464-drb4s"] Mar 12 15:04:00 crc kubenswrapper[4832]: E0312 15:04:00.185394 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588fb950-c095-436d-a0e7-1192048f4e45" containerName="extract-utilities" Mar 12 15:04:00 crc kubenswrapper[4832]: I0312 15:04:00.185470 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="588fb950-c095-436d-a0e7-1192048f4e45" containerName="extract-utilities" Mar 12 15:04:00 crc kubenswrapper[4832]: E0312 15:04:00.185547 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588fb950-c095-436d-a0e7-1192048f4e45" containerName="registry-server" Mar 12 15:04:00 crc kubenswrapper[4832]: I0312 15:04:00.185633 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="588fb950-c095-436d-a0e7-1192048f4e45" containerName="registry-server" Mar 12 15:04:00 crc kubenswrapper[4832]: E0312 15:04:00.185713 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588fb950-c095-436d-a0e7-1192048f4e45" containerName="extract-content" Mar 12 15:04:00 crc kubenswrapper[4832]: I0312 15:04:00.185763 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="588fb950-c095-436d-a0e7-1192048f4e45" containerName="extract-content" Mar 12 15:04:00 crc kubenswrapper[4832]: I0312 
15:04:00.185910 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="588fb950-c095-436d-a0e7-1192048f4e45" containerName="registry-server" Mar 12 15:04:00 crc kubenswrapper[4832]: I0312 15:04:00.186343 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555464-drb4s" Mar 12 15:04:00 crc kubenswrapper[4832]: I0312 15:04:00.193898 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:04:00 crc kubenswrapper[4832]: I0312 15:04:00.200601 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:04:00 crc kubenswrapper[4832]: I0312 15:04:00.200944 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:04:00 crc kubenswrapper[4832]: I0312 15:04:00.239338 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555464-drb4s"] Mar 12 15:04:00 crc kubenswrapper[4832]: I0312 15:04:00.313945 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75dtg\" (UniqueName: \"kubernetes.io/projected/7bd68034-e452-409f-aeb1-121908cb2498-kube-api-access-75dtg\") pod \"auto-csr-approver-29555464-drb4s\" (UID: \"7bd68034-e452-409f-aeb1-121908cb2498\") " pod="openshift-infra/auto-csr-approver-29555464-drb4s" Mar 12 15:04:00 crc kubenswrapper[4832]: I0312 15:04:00.414686 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75dtg\" (UniqueName: \"kubernetes.io/projected/7bd68034-e452-409f-aeb1-121908cb2498-kube-api-access-75dtg\") pod \"auto-csr-approver-29555464-drb4s\" (UID: \"7bd68034-e452-409f-aeb1-121908cb2498\") " pod="openshift-infra/auto-csr-approver-29555464-drb4s" Mar 12 15:04:00 crc kubenswrapper[4832]: I0312 15:04:00.439225 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75dtg\" (UniqueName: \"kubernetes.io/projected/7bd68034-e452-409f-aeb1-121908cb2498-kube-api-access-75dtg\") pod \"auto-csr-approver-29555464-drb4s\" (UID: \"7bd68034-e452-409f-aeb1-121908cb2498\") " pod="openshift-infra/auto-csr-approver-29555464-drb4s" Mar 12 15:04:00 crc kubenswrapper[4832]: I0312 15:04:00.501715 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555464-drb4s" Mar 12 15:04:00 crc kubenswrapper[4832]: I0312 15:04:00.744327 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555464-drb4s"] Mar 12 15:04:01 crc kubenswrapper[4832]: I0312 15:04:01.083394 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555464-drb4s" event={"ID":"7bd68034-e452-409f-aeb1-121908cb2498","Type":"ContainerStarted","Data":"4fd73e126cfb9cf69ba38c4183fdcaf9b79265ac19d0ed202860818c7b4438b1"} Mar 12 15:04:02 crc kubenswrapper[4832]: I0312 15:04:02.091089 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555464-drb4s" event={"ID":"7bd68034-e452-409f-aeb1-121908cb2498","Type":"ContainerStarted","Data":"8f449e1ebac5bcdb683faf9c6dcdb34c6f4dbf1557ac4f355e25bbb97c9ca157"} Mar 12 15:04:02 crc kubenswrapper[4832]: I0312 15:04:02.108068 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555464-drb4s" podStartSLOduration=1.234300441 podStartE2EDuration="2.10804518s" podCreationTimestamp="2026-03-12 15:04:00 +0000 UTC" firstStartedPulling="2026-03-12 15:04:00.757456438 +0000 UTC m=+999.401470664" lastFinishedPulling="2026-03-12 15:04:01.631201137 +0000 UTC m=+1000.275215403" observedRunningTime="2026-03-12 15:04:02.104982141 +0000 UTC m=+1000.748996377" watchObservedRunningTime="2026-03-12 15:04:02.10804518 +0000 UTC m=+1000.752059416" Mar 12 
15:04:03 crc kubenswrapper[4832]: I0312 15:04:03.098102 4832 generic.go:334] "Generic (PLEG): container finished" podID="7bd68034-e452-409f-aeb1-121908cb2498" containerID="8f449e1ebac5bcdb683faf9c6dcdb34c6f4dbf1557ac4f355e25bbb97c9ca157" exitCode=0 Mar 12 15:04:03 crc kubenswrapper[4832]: I0312 15:04:03.098179 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555464-drb4s" event={"ID":"7bd68034-e452-409f-aeb1-121908cb2498","Type":"ContainerDied","Data":"8f449e1ebac5bcdb683faf9c6dcdb34c6f4dbf1557ac4f355e25bbb97c9ca157"} Mar 12 15:04:03 crc kubenswrapper[4832]: I0312 15:04:03.716287 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r5shw"] Mar 12 15:04:03 crc kubenswrapper[4832]: I0312 15:04:03.718157 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r5shw" Mar 12 15:04:03 crc kubenswrapper[4832]: I0312 15:04:03.727164 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r5shw"] Mar 12 15:04:03 crc kubenswrapper[4832]: I0312 15:04:03.767394 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44167a86-7de7-4855-9e75-3f04b5e446fe-catalog-content\") pod \"redhat-marketplace-r5shw\" (UID: \"44167a86-7de7-4855-9e75-3f04b5e446fe\") " pod="openshift-marketplace/redhat-marketplace-r5shw" Mar 12 15:04:03 crc kubenswrapper[4832]: I0312 15:04:03.767635 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7m7z\" (UniqueName: \"kubernetes.io/projected/44167a86-7de7-4855-9e75-3f04b5e446fe-kube-api-access-t7m7z\") pod \"redhat-marketplace-r5shw\" (UID: \"44167a86-7de7-4855-9e75-3f04b5e446fe\") " pod="openshift-marketplace/redhat-marketplace-r5shw" Mar 12 15:04:03 crc kubenswrapper[4832]: I0312 
15:04:03.767849 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44167a86-7de7-4855-9e75-3f04b5e446fe-utilities\") pod \"redhat-marketplace-r5shw\" (UID: \"44167a86-7de7-4855-9e75-3f04b5e446fe\") " pod="openshift-marketplace/redhat-marketplace-r5shw" Mar 12 15:04:03 crc kubenswrapper[4832]: I0312 15:04:03.868337 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44167a86-7de7-4855-9e75-3f04b5e446fe-catalog-content\") pod \"redhat-marketplace-r5shw\" (UID: \"44167a86-7de7-4855-9e75-3f04b5e446fe\") " pod="openshift-marketplace/redhat-marketplace-r5shw" Mar 12 15:04:03 crc kubenswrapper[4832]: I0312 15:04:03.868382 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7m7z\" (UniqueName: \"kubernetes.io/projected/44167a86-7de7-4855-9e75-3f04b5e446fe-kube-api-access-t7m7z\") pod \"redhat-marketplace-r5shw\" (UID: \"44167a86-7de7-4855-9e75-3f04b5e446fe\") " pod="openshift-marketplace/redhat-marketplace-r5shw" Mar 12 15:04:03 crc kubenswrapper[4832]: I0312 15:04:03.868443 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44167a86-7de7-4855-9e75-3f04b5e446fe-utilities\") pod \"redhat-marketplace-r5shw\" (UID: \"44167a86-7de7-4855-9e75-3f04b5e446fe\") " pod="openshift-marketplace/redhat-marketplace-r5shw" Mar 12 15:04:03 crc kubenswrapper[4832]: I0312 15:04:03.868816 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44167a86-7de7-4855-9e75-3f04b5e446fe-catalog-content\") pod \"redhat-marketplace-r5shw\" (UID: \"44167a86-7de7-4855-9e75-3f04b5e446fe\") " pod="openshift-marketplace/redhat-marketplace-r5shw" Mar 12 15:04:03 crc kubenswrapper[4832]: I0312 15:04:03.868870 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44167a86-7de7-4855-9e75-3f04b5e446fe-utilities\") pod \"redhat-marketplace-r5shw\" (UID: \"44167a86-7de7-4855-9e75-3f04b5e446fe\") " pod="openshift-marketplace/redhat-marketplace-r5shw" Mar 12 15:04:03 crc kubenswrapper[4832]: I0312 15:04:03.887036 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7m7z\" (UniqueName: \"kubernetes.io/projected/44167a86-7de7-4855-9e75-3f04b5e446fe-kube-api-access-t7m7z\") pod \"redhat-marketplace-r5shw\" (UID: \"44167a86-7de7-4855-9e75-3f04b5e446fe\") " pod="openshift-marketplace/redhat-marketplace-r5shw" Mar 12 15:04:04 crc kubenswrapper[4832]: I0312 15:04:04.034383 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r5shw" Mar 12 15:04:04 crc kubenswrapper[4832]: I0312 15:04:04.377566 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555464-drb4s" Mar 12 15:04:04 crc kubenswrapper[4832]: I0312 15:04:04.494989 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r5shw"] Mar 12 15:04:04 crc kubenswrapper[4832]: W0312 15:04:04.495361 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44167a86_7de7_4855_9e75_3f04b5e446fe.slice/crio-fe1d33e5e1357305e73c804e13cae6bc9e8bdd4cdffeff2cd303f4a1393bf7c0 WatchSource:0}: Error finding container fe1d33e5e1357305e73c804e13cae6bc9e8bdd4cdffeff2cd303f4a1393bf7c0: Status 404 returned error can't find the container with id fe1d33e5e1357305e73c804e13cae6bc9e8bdd4cdffeff2cd303f4a1393bf7c0 Mar 12 15:04:04 crc kubenswrapper[4832]: I0312 15:04:04.579626 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75dtg\" (UniqueName: \"kubernetes.io/projected/7bd68034-e452-409f-aeb1-121908cb2498-kube-api-access-75dtg\") pod \"7bd68034-e452-409f-aeb1-121908cb2498\" (UID: \"7bd68034-e452-409f-aeb1-121908cb2498\") " Mar 12 15:04:04 crc kubenswrapper[4832]: I0312 15:04:04.588211 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd68034-e452-409f-aeb1-121908cb2498-kube-api-access-75dtg" (OuterVolumeSpecName: "kube-api-access-75dtg") pod "7bd68034-e452-409f-aeb1-121908cb2498" (UID: "7bd68034-e452-409f-aeb1-121908cb2498"). InnerVolumeSpecName "kube-api-access-75dtg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:04:04 crc kubenswrapper[4832]: I0312 15:04:04.681523 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75dtg\" (UniqueName: \"kubernetes.io/projected/7bd68034-e452-409f-aeb1-121908cb2498-kube-api-access-75dtg\") on node \"crc\" DevicePath \"\"" Mar 12 15:04:05 crc kubenswrapper[4832]: I0312 15:04:05.112146 4832 generic.go:334] "Generic (PLEG): container finished" podID="44167a86-7de7-4855-9e75-3f04b5e446fe" containerID="b412c7526e324737d0bc51bbb48700699eb8b40da1d694da8b31917bb0ea4015" exitCode=0 Mar 12 15:04:05 crc kubenswrapper[4832]: I0312 15:04:05.112258 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5shw" event={"ID":"44167a86-7de7-4855-9e75-3f04b5e446fe","Type":"ContainerDied","Data":"b412c7526e324737d0bc51bbb48700699eb8b40da1d694da8b31917bb0ea4015"} Mar 12 15:04:05 crc kubenswrapper[4832]: I0312 15:04:05.112462 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5shw" event={"ID":"44167a86-7de7-4855-9e75-3f04b5e446fe","Type":"ContainerStarted","Data":"fe1d33e5e1357305e73c804e13cae6bc9e8bdd4cdffeff2cd303f4a1393bf7c0"} Mar 12 15:04:05 crc kubenswrapper[4832]: I0312 15:04:05.116582 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555464-drb4s" event={"ID":"7bd68034-e452-409f-aeb1-121908cb2498","Type":"ContainerDied","Data":"4fd73e126cfb9cf69ba38c4183fdcaf9b79265ac19d0ed202860818c7b4438b1"} Mar 12 15:04:05 crc kubenswrapper[4832]: I0312 15:04:05.116634 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fd73e126cfb9cf69ba38c4183fdcaf9b79265ac19d0ed202860818c7b4438b1" Mar 12 15:04:05 crc kubenswrapper[4832]: I0312 15:04:05.116651 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555464-drb4s" Mar 12 15:04:05 crc kubenswrapper[4832]: I0312 15:04:05.156439 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555458-n99rk"] Mar 12 15:04:05 crc kubenswrapper[4832]: I0312 15:04:05.165487 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555458-n99rk"] Mar 12 15:04:06 crc kubenswrapper[4832]: I0312 15:04:06.125231 4832 generic.go:334] "Generic (PLEG): container finished" podID="44167a86-7de7-4855-9e75-3f04b5e446fe" containerID="1e2dfb7b2466626b39cc8e82ecee6946286ae0c48539b19b86b594534dc2dbf6" exitCode=0 Mar 12 15:04:06 crc kubenswrapper[4832]: I0312 15:04:06.125267 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5shw" event={"ID":"44167a86-7de7-4855-9e75-3f04b5e446fe","Type":"ContainerDied","Data":"1e2dfb7b2466626b39cc8e82ecee6946286ae0c48539b19b86b594534dc2dbf6"} Mar 12 15:04:06 crc kubenswrapper[4832]: I0312 15:04:06.627667 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="046f9728-41b1-4ad1-848b-4f34b021e284" path="/var/lib/kubelet/pods/046f9728-41b1-4ad1-848b-4f34b021e284/volumes" Mar 12 15:04:07 crc kubenswrapper[4832]: I0312 15:04:07.135285 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5shw" event={"ID":"44167a86-7de7-4855-9e75-3f04b5e446fe","Type":"ContainerStarted","Data":"463b256eb4ba6b328542f9e84e5e4f94f00dcaa690ee6f4064eac452ff1f6486"} Mar 12 15:04:07 crc kubenswrapper[4832]: I0312 15:04:07.169078 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r5shw" podStartSLOduration=2.739468068 podStartE2EDuration="4.169059239s" podCreationTimestamp="2026-03-12 15:04:03 +0000 UTC" firstStartedPulling="2026-03-12 15:04:05.114371967 +0000 UTC m=+1003.758386203" lastFinishedPulling="2026-03-12 
15:04:06.543963148 +0000 UTC m=+1005.187977374" observedRunningTime="2026-03-12 15:04:07.166564807 +0000 UTC m=+1005.810579033" watchObservedRunningTime="2026-03-12 15:04:07.169059239 +0000 UTC m=+1005.813073465" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.545054 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-ckrwx"] Mar 12 15:04:12 crc kubenswrapper[4832]: E0312 15:04:12.545994 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd68034-e452-409f-aeb1-121908cb2498" containerName="oc" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.546009 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd68034-e452-409f-aeb1-121908cb2498" containerName="oc" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.546153 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd68034-e452-409f-aeb1-121908cb2498" containerName="oc" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.546632 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-ckrwx" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.548249 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-fmv8q" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.564482 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-ckrwx"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.583664 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-rnjmc"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.584862 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rnjmc" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.595894 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-6qwvq" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.604225 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-sxlwm"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.608788 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-sxlwm" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.614492 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-rnjmc"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.615897 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh7hn\" (UniqueName: \"kubernetes.io/projected/f258cf7f-c099-40f8-94be-4e0ec5252d88-kube-api-access-qh7hn\") pod \"cinder-operator-controller-manager-984cd4dcf-rnjmc\" (UID: \"f258cf7f-c099-40f8-94be-4e0ec5252d88\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rnjmc" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.616001 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln4xn\" (UniqueName: \"kubernetes.io/projected/1361287f-20d2-4603-acad-c6b3a79040b2-kube-api-access-ln4xn\") pod \"barbican-operator-controller-manager-677bd678f7-ckrwx\" (UID: \"1361287f-20d2-4603-acad-c6b3a79040b2\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-ckrwx" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.616067 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjfb8\" (UniqueName: \"kubernetes.io/projected/3fd6082b-ab5f-434e-9585-2bcc34c7cba9-kube-api-access-rjfb8\") pod \"designate-operator-controller-manager-66d56f6ff4-sxlwm\" (UID: \"3fd6082b-ab5f-434e-9585-2bcc34c7cba9\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-sxlwm" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.623179 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-wpxdp" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.647869 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-sxlwm"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.681583 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-2vj9s"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.682627 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-2vj9s" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.687332 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-hcw8l" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.692715 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-2sljf"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.693762 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-2sljf" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.697800 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-5v29r" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.701621 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-2vj9s"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.707336 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-2sljf"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.712101 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-r6kt4"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.713044 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-r6kt4" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.716812 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7szxc\" (UniqueName: \"kubernetes.io/projected/f8760bef-6ca3-412a-b8bc-49de609fe9d3-kube-api-access-7szxc\") pod \"heat-operator-controller-manager-77b6666d85-2sljf\" (UID: \"f8760bef-6ca3-412a-b8bc-49de609fe9d3\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-2sljf" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.716864 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjfb8\" (UniqueName: \"kubernetes.io/projected/3fd6082b-ab5f-434e-9585-2bcc34c7cba9-kube-api-access-rjfb8\") pod \"designate-operator-controller-manager-66d56f6ff4-sxlwm\" (UID: \"3fd6082b-ab5f-434e-9585-2bcc34c7cba9\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-sxlwm" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.716893 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxf2z\" (UniqueName: \"kubernetes.io/projected/2a4714d0-f39b-499a-88aa-e960dad0e00b-kube-api-access-cxf2z\") pod \"horizon-operator-controller-manager-6d9d6b584d-r6kt4\" (UID: \"2a4714d0-f39b-499a-88aa-e960dad0e00b\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-r6kt4" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.716926 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh7hn\" (UniqueName: \"kubernetes.io/projected/f258cf7f-c099-40f8-94be-4e0ec5252d88-kube-api-access-qh7hn\") pod \"cinder-operator-controller-manager-984cd4dcf-rnjmc\" (UID: \"f258cf7f-c099-40f8-94be-4e0ec5252d88\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rnjmc" Mar 12 15:04:12 crc 
kubenswrapper[4832]: I0312 15:04:12.716951 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-622ch\" (UniqueName: \"kubernetes.io/projected/6eb7bcdf-9bfa-4f7e-890b-9b5e7ea50f8f-kube-api-access-622ch\") pod \"glance-operator-controller-manager-5964f64c48-2vj9s\" (UID: \"6eb7bcdf-9bfa-4f7e-890b-9b5e7ea50f8f\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-2vj9s" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.716971 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln4xn\" (UniqueName: \"kubernetes.io/projected/1361287f-20d2-4603-acad-c6b3a79040b2-kube-api-access-ln4xn\") pod \"barbican-operator-controller-manager-677bd678f7-ckrwx\" (UID: \"1361287f-20d2-4603-acad-c6b3a79040b2\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-ckrwx" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.720725 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-8hj9n" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.724687 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-r6kt4"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.733120 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.734368 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.744140 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.744917 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-jr5hd"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.746412 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln4xn\" (UniqueName: \"kubernetes.io/projected/1361287f-20d2-4603-acad-c6b3a79040b2-kube-api-access-ln4xn\") pod \"barbican-operator-controller-manager-677bd678f7-ckrwx\" (UID: \"1361287f-20d2-4603-acad-c6b3a79040b2\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-ckrwx" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.746679 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-jr5hd" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.747333 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-8qdtc" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.749895 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-7pzcn" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.750914 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh7hn\" (UniqueName: \"kubernetes.io/projected/f258cf7f-c099-40f8-94be-4e0ec5252d88-kube-api-access-qh7hn\") pod \"cinder-operator-controller-manager-984cd4dcf-rnjmc\" (UID: \"f258cf7f-c099-40f8-94be-4e0ec5252d88\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rnjmc" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.754664 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.766783 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjfb8\" (UniqueName: \"kubernetes.io/projected/3fd6082b-ab5f-434e-9585-2bcc34c7cba9-kube-api-access-rjfb8\") pod \"designate-operator-controller-manager-66d56f6ff4-sxlwm\" (UID: \"3fd6082b-ab5f-434e-9585-2bcc34c7cba9\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-sxlwm" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.787733 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-8nlxs"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.788965 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-8nlxs" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.792952 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-hxhd6" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.823695 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxf2z\" (UniqueName: \"kubernetes.io/projected/2a4714d0-f39b-499a-88aa-e960dad0e00b-kube-api-access-cxf2z\") pod \"horizon-operator-controller-manager-6d9d6b584d-r6kt4\" (UID: \"2a4714d0-f39b-499a-88aa-e960dad0e00b\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-r6kt4" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.823787 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-622ch\" (UniqueName: \"kubernetes.io/projected/6eb7bcdf-9bfa-4f7e-890b-9b5e7ea50f8f-kube-api-access-622ch\") pod \"glance-operator-controller-manager-5964f64c48-2vj9s\" (UID: \"6eb7bcdf-9bfa-4f7e-890b-9b5e7ea50f8f\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-2vj9s" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.823856 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7szxc\" (UniqueName: \"kubernetes.io/projected/f8760bef-6ca3-412a-b8bc-49de609fe9d3-kube-api-access-7szxc\") pod \"heat-operator-controller-manager-77b6666d85-2sljf\" (UID: \"f8760bef-6ca3-412a-b8bc-49de609fe9d3\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-2sljf" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.824822 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-jr5hd"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.840785 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-8nlxs"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.865834 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-622ch\" (UniqueName: \"kubernetes.io/projected/6eb7bcdf-9bfa-4f7e-890b-9b5e7ea50f8f-kube-api-access-622ch\") pod \"glance-operator-controller-manager-5964f64c48-2vj9s\" (UID: \"6eb7bcdf-9bfa-4f7e-890b-9b5e7ea50f8f\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-2vj9s" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.876202 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-ckrwx" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.876430 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxf2z\" (UniqueName: \"kubernetes.io/projected/2a4714d0-f39b-499a-88aa-e960dad0e00b-kube-api-access-cxf2z\") pod \"horizon-operator-controller-manager-6d9d6b584d-r6kt4\" (UID: \"2a4714d0-f39b-499a-88aa-e960dad0e00b\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-r6kt4" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.881730 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-lvnqm"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.882491 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-lvnqm" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.883103 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7szxc\" (UniqueName: \"kubernetes.io/projected/f8760bef-6ca3-412a-b8bc-49de609fe9d3-kube-api-access-7szxc\") pod \"heat-operator-controller-manager-77b6666d85-2sljf\" (UID: \"f8760bef-6ca3-412a-b8bc-49de609fe9d3\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-2sljf" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.891920 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-7k9ld" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.899873 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-lvnqm"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.925018 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldjv4\" (UniqueName: \"kubernetes.io/projected/572740f5-e207-4372-ab19-2b117aa31c69-kube-api-access-ldjv4\") pod \"keystone-operator-controller-manager-684f77d66d-8nlxs\" (UID: \"572740f5-e207-4372-ab19-2b117aa31c69\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-8nlxs" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.925079 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x7rk\" (UniqueName: \"kubernetes.io/projected/999db9dc-984c-40aa-be0f-1d98b78bf44f-kube-api-access-9x7rk\") pod \"infra-operator-controller-manager-5995f4446f-qk7gq\" (UID: \"999db9dc-984c-40aa-be0f-1d98b78bf44f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.925100 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/999db9dc-984c-40aa-be0f-1d98b78bf44f-cert\") pod \"infra-operator-controller-manager-5995f4446f-qk7gq\" (UID: \"999db9dc-984c-40aa-be0f-1d98b78bf44f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.925131 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wgmg\" (UniqueName: \"kubernetes.io/projected/e636b173-51b8-4325-a1cf-dcea5406cdee-kube-api-access-9wgmg\") pod \"ironic-operator-controller-manager-6bbb499bbc-jr5hd\" (UID: \"e636b173-51b8-4325-a1cf-dcea5406cdee\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-jr5hd" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.933785 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rnjmc" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.949842 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-sxlwm" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.974940 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-9sw66"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.975695 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-9sw66" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.977013 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-9sw66"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.985570 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-sc6vt" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.985920 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-8ngt8"] Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.986695 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-8ngt8" Mar 12 15:04:12 crc kubenswrapper[4832]: I0312 15:04:12.989743 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-pjp4f" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.023010 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-2vj9s" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.026957 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zqzt\" (UniqueName: \"kubernetes.io/projected/5b7a63b2-3a6b-41fc-b7d0-f95e07bb760b-kube-api-access-2zqzt\") pod \"manila-operator-controller-manager-68f45f9d9f-lvnqm\" (UID: \"5b7a63b2-3a6b-41fc-b7d0-f95e07bb760b\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-lvnqm" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.026996 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldjv4\" (UniqueName: \"kubernetes.io/projected/572740f5-e207-4372-ab19-2b117aa31c69-kube-api-access-ldjv4\") pod \"keystone-operator-controller-manager-684f77d66d-8nlxs\" (UID: \"572740f5-e207-4372-ab19-2b117aa31c69\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-8nlxs" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.027039 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x7rk\" (UniqueName: \"kubernetes.io/projected/999db9dc-984c-40aa-be0f-1d98b78bf44f-kube-api-access-9x7rk\") pod \"infra-operator-controller-manager-5995f4446f-qk7gq\" (UID: \"999db9dc-984c-40aa-be0f-1d98b78bf44f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.027058 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/999db9dc-984c-40aa-be0f-1d98b78bf44f-cert\") pod \"infra-operator-controller-manager-5995f4446f-qk7gq\" (UID: \"999db9dc-984c-40aa-be0f-1d98b78bf44f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.027085 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wgmg\" (UniqueName: \"kubernetes.io/projected/e636b173-51b8-4325-a1cf-dcea5406cdee-kube-api-access-9wgmg\") pod \"ironic-operator-controller-manager-6bbb499bbc-jr5hd\" (UID: \"e636b173-51b8-4325-a1cf-dcea5406cdee\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-jr5hd" Mar 12 15:04:13 crc kubenswrapper[4832]: E0312 15:04:13.027581 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 15:04:13 crc kubenswrapper[4832]: E0312 15:04:13.027628 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/999db9dc-984c-40aa-be0f-1d98b78bf44f-cert podName:999db9dc-984c-40aa-be0f-1d98b78bf44f nodeName:}" failed. No retries permitted until 2026-03-12 15:04:13.527609186 +0000 UTC m=+1012.171623412 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/999db9dc-984c-40aa-be0f-1d98b78bf44f-cert") pod "infra-operator-controller-manager-5995f4446f-qk7gq" (UID: "999db9dc-984c-40aa-be0f-1d98b78bf44f") : secret "infra-operator-webhook-server-cert" not found Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.028655 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-8ngt8"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.040696 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-2z4xv"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.041699 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-2z4xv" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.045385 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-4nd8s" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.054294 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-4xd9n"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.055567 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-4xd9n" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.063987 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-2sz5w" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.064460 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-2sljf" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.065212 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldjv4\" (UniqueName: \"kubernetes.io/projected/572740f5-e207-4372-ab19-2b117aa31c69-kube-api-access-ldjv4\") pod \"keystone-operator-controller-manager-684f77d66d-8nlxs\" (UID: \"572740f5-e207-4372-ab19-2b117aa31c69\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-8nlxs" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.075538 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wgmg\" (UniqueName: \"kubernetes.io/projected/e636b173-51b8-4325-a1cf-dcea5406cdee-kube-api-access-9wgmg\") pod \"ironic-operator-controller-manager-6bbb499bbc-jr5hd\" (UID: \"e636b173-51b8-4325-a1cf-dcea5406cdee\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-jr5hd" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.096442 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x7rk\" (UniqueName: \"kubernetes.io/projected/999db9dc-984c-40aa-be0f-1d98b78bf44f-kube-api-access-9x7rk\") pod \"infra-operator-controller-manager-5995f4446f-qk7gq\" (UID: \"999db9dc-984c-40aa-be0f-1d98b78bf44f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.101541 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-r6kt4" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.136169 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95dxt\" (UniqueName: \"kubernetes.io/projected/3272780b-2460-41f3-bc98-7ff7708bda6f-kube-api-access-95dxt\") pod \"mariadb-operator-controller-manager-658d4cdd5-9sw66\" (UID: \"3272780b-2460-41f3-bc98-7ff7708bda6f\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-9sw66" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.136277 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zqzt\" (UniqueName: \"kubernetes.io/projected/5b7a63b2-3a6b-41fc-b7d0-f95e07bb760b-kube-api-access-2zqzt\") pod \"manila-operator-controller-manager-68f45f9d9f-lvnqm\" (UID: \"5b7a63b2-3a6b-41fc-b7d0-f95e07bb760b\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-lvnqm" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.137623 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5szh\" (UniqueName: \"kubernetes.io/projected/6fe61dcf-24b0-4c97-9639-15335615d4d4-kube-api-access-m5szh\") pod \"neutron-operator-controller-manager-776c5696bf-8ngt8\" (UID: \"6fe61dcf-24b0-4c97-9639-15335615d4d4\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-8ngt8" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.137732 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-jr5hd" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.142752 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-2z4xv"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.143015 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-8nlxs" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.154864 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-4xd9n"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.160762 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-qdn75"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.161784 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-qdn75" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.169708 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-pm9qj" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.173860 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.174992 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.182291 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-fzdwp" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.182486 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.183905 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zqzt\" (UniqueName: \"kubernetes.io/projected/5b7a63b2-3a6b-41fc-b7d0-f95e07bb760b-kube-api-access-2zqzt\") pod \"manila-operator-controller-manager-68f45f9d9f-lvnqm\" (UID: \"5b7a63b2-3a6b-41fc-b7d0-f95e07bb760b\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-lvnqm" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.193910 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-qdn75"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.208618 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.219763 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-lvnqm" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.225205 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-6t9gx"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.227110 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-6t9gx" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.232576 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-6t9gx"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.233162 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-kt9n8" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.245459 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdrgn\" (UniqueName: \"kubernetes.io/projected/78b5b9cf-6e4a-4ac8-8611-06b417453f45-kube-api-access-xdrgn\") pod \"nova-operator-controller-manager-569cc54c5-2z4xv\" (UID: \"78b5b9cf-6e4a-4ac8-8611-06b417453f45\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-2z4xv" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.245496 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqnwh\" (UniqueName: \"kubernetes.io/projected/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-kube-api-access-lqnwh\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7b625q\" (UID: \"fbce7e5d-a791-4984-94c9-3bfdc12d70b9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.245528 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt5fr\" (UniqueName: \"kubernetes.io/projected/6c25d60c-d053-4b33-9ddd-8a95f18480f7-kube-api-access-dt5fr\") pod \"octavia-operator-controller-manager-5f4f55cb5c-4xd9n\" (UID: \"6c25d60c-d053-4b33-9ddd-8a95f18480f7\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-4xd9n" Mar 12 15:04:13 crc 
kubenswrapper[4832]: I0312 15:04:13.245554 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95dxt\" (UniqueName: \"kubernetes.io/projected/3272780b-2460-41f3-bc98-7ff7708bda6f-kube-api-access-95dxt\") pod \"mariadb-operator-controller-manager-658d4cdd5-9sw66\" (UID: \"3272780b-2460-41f3-bc98-7ff7708bda6f\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-9sw66" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.245573 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqsrp\" (UniqueName: \"kubernetes.io/projected/3d58b640-14cf-4576-b441-448a87e34b04-kube-api-access-mqsrp\") pod \"placement-operator-controller-manager-574d45c66c-6t9gx\" (UID: \"3d58b640-14cf-4576-b441-448a87e34b04\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-6t9gx" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.245677 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4vxh\" (UniqueName: \"kubernetes.io/projected/55d4a1b5-5971-426d-91dd-9a8f991552c0-kube-api-access-w4vxh\") pod \"ovn-operator-controller-manager-bbc5b68f9-qdn75\" (UID: \"55d4a1b5-5971-426d-91dd-9a8f991552c0\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-qdn75" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.245719 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5szh\" (UniqueName: \"kubernetes.io/projected/6fe61dcf-24b0-4c97-9639-15335615d4d4-kube-api-access-m5szh\") pod \"neutron-operator-controller-manager-776c5696bf-8ngt8\" (UID: \"6fe61dcf-24b0-4c97-9639-15335615d4d4\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-8ngt8" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.245779 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7b625q\" (UID: \"fbce7e5d-a791-4984-94c9-3bfdc12d70b9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.249318 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5qrl6"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.250099 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5qrl6" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.258015 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-prg7t" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.266321 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-c6wkm"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.267458 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-c6wkm" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.269246 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-fj6l2" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.277715 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95dxt\" (UniqueName: \"kubernetes.io/projected/3272780b-2460-41f3-bc98-7ff7708bda6f-kube-api-access-95dxt\") pod \"mariadb-operator-controller-manager-658d4cdd5-9sw66\" (UID: \"3272780b-2460-41f3-bc98-7ff7708bda6f\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-9sw66" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.295545 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5szh\" (UniqueName: \"kubernetes.io/projected/6fe61dcf-24b0-4c97-9639-15335615d4d4-kube-api-access-m5szh\") pod \"neutron-operator-controller-manager-776c5696bf-8ngt8\" (UID: \"6fe61dcf-24b0-4c97-9639-15335615d4d4\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-8ngt8" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.308416 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5qrl6"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.311076 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-9sw66" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.319546 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-djk4r"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.319951 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-8ngt8" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.323290 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-djk4r" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.329954 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-c6wkm"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.333778 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-6wq6b" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.350898 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7b625q\" (UID: \"fbce7e5d-a791-4984-94c9-3bfdc12d70b9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q" Mar 12 15:04:13 crc kubenswrapper[4832]: E0312 15:04:13.351002 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 15:04:13 crc kubenswrapper[4832]: E0312 15:04:13.351043 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-cert podName:fbce7e5d-a791-4984-94c9-3bfdc12d70b9 nodeName:}" failed. No retries permitted until 2026-03-12 15:04:13.851029372 +0000 UTC m=+1012.495043598 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7b625q" (UID: "fbce7e5d-a791-4984-94c9-3bfdc12d70b9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.351214 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4q68\" (UniqueName: \"kubernetes.io/projected/555d3165-c8b4-4bd9-bdc9-2e988734971b-kube-api-access-z4q68\") pod \"swift-operator-controller-manager-677c674df7-c6wkm\" (UID: \"555d3165-c8b4-4bd9-bdc9-2e988734971b\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-c6wkm" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.351242 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdrgn\" (UniqueName: \"kubernetes.io/projected/78b5b9cf-6e4a-4ac8-8611-06b417453f45-kube-api-access-xdrgn\") pod \"nova-operator-controller-manager-569cc54c5-2z4xv\" (UID: \"78b5b9cf-6e4a-4ac8-8611-06b417453f45\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-2z4xv" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.351263 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqnwh\" (UniqueName: \"kubernetes.io/projected/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-kube-api-access-lqnwh\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7b625q\" (UID: \"fbce7e5d-a791-4984-94c9-3bfdc12d70b9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.351279 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5fr\" (UniqueName: \"kubernetes.io/projected/6c25d60c-d053-4b33-9ddd-8a95f18480f7-kube-api-access-dt5fr\") 
pod \"octavia-operator-controller-manager-5f4f55cb5c-4xd9n\" (UID: \"6c25d60c-d053-4b33-9ddd-8a95f18480f7\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-4xd9n" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.351299 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqsrp\" (UniqueName: \"kubernetes.io/projected/3d58b640-14cf-4576-b441-448a87e34b04-kube-api-access-mqsrp\") pod \"placement-operator-controller-manager-574d45c66c-6t9gx\" (UID: \"3d58b640-14cf-4576-b441-448a87e34b04\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-6t9gx" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.351340 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfnq7\" (UniqueName: \"kubernetes.io/projected/6b8d3e31-3f6c-4be0-b289-cd5afd6bb142-kube-api-access-lfnq7\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-5qrl6\" (UID: \"6b8d3e31-3f6c-4be0-b289-cd5afd6bb142\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5qrl6" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.351361 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4z92\" (UniqueName: \"kubernetes.io/projected/65719325-3b5a-4c67-add5-446fbadb2951-kube-api-access-s4z92\") pod \"test-operator-controller-manager-5c5cb9c4d7-djk4r\" (UID: \"65719325-3b5a-4c67-add5-446fbadb2951\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-djk4r" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.351377 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4vxh\" (UniqueName: \"kubernetes.io/projected/55d4a1b5-5971-426d-91dd-9a8f991552c0-kube-api-access-w4vxh\") pod \"ovn-operator-controller-manager-bbc5b68f9-qdn75\" (UID: \"55d4a1b5-5971-426d-91dd-9a8f991552c0\") " 
pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-qdn75" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.351969 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-djk4r"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.361589 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hgppt"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.362393 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hgppt" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.377637 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hgppt"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.378232 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-zbv7b" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.414795 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.416422 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.419447 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4vxh\" (UniqueName: \"kubernetes.io/projected/55d4a1b5-5971-426d-91dd-9a8f991552c0-kube-api-access-w4vxh\") pod \"ovn-operator-controller-manager-bbc5b68f9-qdn75\" (UID: \"55d4a1b5-5971-426d-91dd-9a8f991552c0\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-qdn75" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.419841 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.419952 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-c46tn" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.420069 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.429030 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqsrp\" (UniqueName: \"kubernetes.io/projected/3d58b640-14cf-4576-b441-448a87e34b04-kube-api-access-mqsrp\") pod \"placement-operator-controller-manager-574d45c66c-6t9gx\" (UID: \"3d58b640-14cf-4576-b441-448a87e34b04\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-6t9gx" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.430147 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqnwh\" (UniqueName: \"kubernetes.io/projected/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-kube-api-access-lqnwh\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7b625q\" (UID: \"fbce7e5d-a791-4984-94c9-3bfdc12d70b9\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.432603 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt5fr\" (UniqueName: \"kubernetes.io/projected/6c25d60c-d053-4b33-9ddd-8a95f18480f7-kube-api-access-dt5fr\") pod \"octavia-operator-controller-manager-5f4f55cb5c-4xd9n\" (UID: \"6c25d60c-d053-4b33-9ddd-8a95f18480f7\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-4xd9n" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.437977 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdrgn\" (UniqueName: \"kubernetes.io/projected/78b5b9cf-6e4a-4ac8-8611-06b417453f45-kube-api-access-xdrgn\") pod \"nova-operator-controller-manager-569cc54c5-2z4xv\" (UID: \"78b5b9cf-6e4a-4ac8-8611-06b417453f45\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-2z4xv" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.438140 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.451918 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4q68\" (UniqueName: \"kubernetes.io/projected/555d3165-c8b4-4bd9-bdc9-2e988734971b-kube-api-access-z4q68\") pod \"swift-operator-controller-manager-677c674df7-c6wkm\" (UID: \"555d3165-c8b4-4bd9-bdc9-2e988734971b\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-c6wkm" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.451976 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-webhook-certs\") pod \"openstack-operator-controller-manager-7d46bf84bd-7lz8z\" (UID: 
\"e684de45-1d61-4324-8d52-801b7f2c0b52\") " pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.452016 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-metrics-certs\") pod \"openstack-operator-controller-manager-7d46bf84bd-7lz8z\" (UID: \"e684de45-1d61-4324-8d52-801b7f2c0b52\") " pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.452034 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfnq7\" (UniqueName: \"kubernetes.io/projected/6b8d3e31-3f6c-4be0-b289-cd5afd6bb142-kube-api-access-lfnq7\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-5qrl6\" (UID: \"6b8d3e31-3f6c-4be0-b289-cd5afd6bb142\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5qrl6" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.452057 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4z92\" (UniqueName: \"kubernetes.io/projected/65719325-3b5a-4c67-add5-446fbadb2951-kube-api-access-s4z92\") pod \"test-operator-controller-manager-5c5cb9c4d7-djk4r\" (UID: \"65719325-3b5a-4c67-add5-446fbadb2951\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-djk4r" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.452077 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2rmw\" (UniqueName: \"kubernetes.io/projected/a24c7823-20be-4bc5-82cf-fd57d664cb8f-kube-api-access-m2rmw\") pod \"watcher-operator-controller-manager-6dd88c6f67-hgppt\" (UID: \"a24c7823-20be-4bc5-82cf-fd57d664cb8f\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hgppt" Mar 12 
15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.452100 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jnwf\" (UniqueName: \"kubernetes.io/projected/e684de45-1d61-4324-8d52-801b7f2c0b52-kube-api-access-6jnwf\") pod \"openstack-operator-controller-manager-7d46bf84bd-7lz8z\" (UID: \"e684de45-1d61-4324-8d52-801b7f2c0b52\") " pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.475107 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4z92\" (UniqueName: \"kubernetes.io/projected/65719325-3b5a-4c67-add5-446fbadb2951-kube-api-access-s4z92\") pod \"test-operator-controller-manager-5c5cb9c4d7-djk4r\" (UID: \"65719325-3b5a-4c67-add5-446fbadb2951\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-djk4r" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.475729 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfnq7\" (UniqueName: \"kubernetes.io/projected/6b8d3e31-3f6c-4be0-b289-cd5afd6bb142-kube-api-access-lfnq7\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-5qrl6\" (UID: \"6b8d3e31-3f6c-4be0-b289-cd5afd6bb142\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5qrl6" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.481548 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjzmj"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.483682 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4q68\" (UniqueName: \"kubernetes.io/projected/555d3165-c8b4-4bd9-bdc9-2e988734971b-kube-api-access-z4q68\") pod \"swift-operator-controller-manager-677c674df7-c6wkm\" (UID: \"555d3165-c8b4-4bd9-bdc9-2e988734971b\") " 
pod="openstack-operators/swift-operator-controller-manager-677c674df7-c6wkm" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.489308 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjzmj" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.493839 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-qdn75" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.494603 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-zsdr2" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.552779 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-webhook-certs\") pod \"openstack-operator-controller-manager-7d46bf84bd-7lz8z\" (UID: \"e684de45-1d61-4324-8d52-801b7f2c0b52\") " pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.552839 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzlw2\" (UniqueName: \"kubernetes.io/projected/ddc979ca-b73c-42b1-91a9-baf0f882ccf2-kube-api-access-mzlw2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jjzmj\" (UID: \"ddc979ca-b73c-42b1-91a9-baf0f882ccf2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjzmj" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.552875 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-metrics-certs\") pod \"openstack-operator-controller-manager-7d46bf84bd-7lz8z\" (UID: 
\"e684de45-1d61-4324-8d52-801b7f2c0b52\") " pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.552959 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2rmw\" (UniqueName: \"kubernetes.io/projected/a24c7823-20be-4bc5-82cf-fd57d664cb8f-kube-api-access-m2rmw\") pod \"watcher-operator-controller-manager-6dd88c6f67-hgppt\" (UID: \"a24c7823-20be-4bc5-82cf-fd57d664cb8f\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hgppt" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.553046 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jnwf\" (UniqueName: \"kubernetes.io/projected/e684de45-1d61-4324-8d52-801b7f2c0b52-kube-api-access-6jnwf\") pod \"openstack-operator-controller-manager-7d46bf84bd-7lz8z\" (UID: \"e684de45-1d61-4324-8d52-801b7f2c0b52\") " pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.553103 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/999db9dc-984c-40aa-be0f-1d98b78bf44f-cert\") pod \"infra-operator-controller-manager-5995f4446f-qk7gq\" (UID: \"999db9dc-984c-40aa-be0f-1d98b78bf44f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.553129 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-6t9gx" Mar 12 15:04:13 crc kubenswrapper[4832]: E0312 15:04:13.553247 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 15:04:13 crc kubenswrapper[4832]: E0312 15:04:13.553294 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/999db9dc-984c-40aa-be0f-1d98b78bf44f-cert podName:999db9dc-984c-40aa-be0f-1d98b78bf44f nodeName:}" failed. No retries permitted until 2026-03-12 15:04:14.553280204 +0000 UTC m=+1013.197294430 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/999db9dc-984c-40aa-be0f-1d98b78bf44f-cert") pod "infra-operator-controller-manager-5995f4446f-qk7gq" (UID: "999db9dc-984c-40aa-be0f-1d98b78bf44f") : secret "infra-operator-webhook-server-cert" not found Mar 12 15:04:13 crc kubenswrapper[4832]: E0312 15:04:13.553781 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.553790 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjzmj"] Mar 12 15:04:13 crc kubenswrapper[4832]: E0312 15:04:13.553810 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-webhook-certs podName:e684de45-1d61-4324-8d52-801b7f2c0b52 nodeName:}" failed. No retries permitted until 2026-03-12 15:04:14.05379886 +0000 UTC m=+1012.697813086 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-webhook-certs") pod "openstack-operator-controller-manager-7d46bf84bd-7lz8z" (UID: "e684de45-1d61-4324-8d52-801b7f2c0b52") : secret "webhook-server-cert" not found Mar 12 15:04:13 crc kubenswrapper[4832]: E0312 15:04:13.577052 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 15:04:13 crc kubenswrapper[4832]: E0312 15:04:13.577144 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-metrics-certs podName:e684de45-1d61-4324-8d52-801b7f2c0b52 nodeName:}" failed. No retries permitted until 2026-03-12 15:04:14.077125726 +0000 UTC m=+1012.721139952 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-metrics-certs") pod "openstack-operator-controller-manager-7d46bf84bd-7lz8z" (UID: "e684de45-1d61-4324-8d52-801b7f2c0b52") : secret "metrics-server-cert" not found Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.578958 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5qrl6" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.595949 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-c6wkm" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.606988 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jnwf\" (UniqueName: \"kubernetes.io/projected/e684de45-1d61-4324-8d52-801b7f2c0b52-kube-api-access-6jnwf\") pod \"openstack-operator-controller-manager-7d46bf84bd-7lz8z\" (UID: \"e684de45-1d61-4324-8d52-801b7f2c0b52\") " pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.611110 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2rmw\" (UniqueName: \"kubernetes.io/projected/a24c7823-20be-4bc5-82cf-fd57d664cb8f-kube-api-access-m2rmw\") pod \"watcher-operator-controller-manager-6dd88c6f67-hgppt\" (UID: \"a24c7823-20be-4bc5-82cf-fd57d664cb8f\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hgppt" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.654590 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzlw2\" (UniqueName: \"kubernetes.io/projected/ddc979ca-b73c-42b1-91a9-baf0f882ccf2-kube-api-access-mzlw2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jjzmj\" (UID: \"ddc979ca-b73c-42b1-91a9-baf0f882ccf2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjzmj" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.659147 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-2z4xv" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.681618 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzlw2\" (UniqueName: \"kubernetes.io/projected/ddc979ca-b73c-42b1-91a9-baf0f882ccf2-kube-api-access-mzlw2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jjzmj\" (UID: \"ddc979ca-b73c-42b1-91a9-baf0f882ccf2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjzmj" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.688788 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-4xd9n" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.749446 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-djk4r" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.788408 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hgppt" Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.858736 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7b625q\" (UID: \"fbce7e5d-a791-4984-94c9-3bfdc12d70b9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q" Mar 12 15:04:13 crc kubenswrapper[4832]: E0312 15:04:13.858864 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 15:04:13 crc kubenswrapper[4832]: E0312 15:04:13.858934 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-cert podName:fbce7e5d-a791-4984-94c9-3bfdc12d70b9 nodeName:}" failed. No retries permitted until 2026-03-12 15:04:14.858917594 +0000 UTC m=+1013.502931820 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7b625q" (UID: "fbce7e5d-a791-4984-94c9-3bfdc12d70b9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.873031 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-ckrwx"] Mar 12 15:04:13 crc kubenswrapper[4832]: I0312 15:04:13.908491 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjzmj" Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.037186 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r5shw" Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.037255 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r5shw" Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.061567 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-webhook-certs\") pod \"openstack-operator-controller-manager-7d46bf84bd-7lz8z\" (UID: \"e684de45-1d61-4324-8d52-801b7f2c0b52\") " pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z" Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.062929 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.062972 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-webhook-certs podName:e684de45-1d61-4324-8d52-801b7f2c0b52 nodeName:}" failed. No retries permitted until 2026-03-12 15:04:15.062958669 +0000 UTC m=+1013.706972895 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-webhook-certs") pod "openstack-operator-controller-manager-7d46bf84bd-7lz8z" (UID: "e684de45-1d61-4324-8d52-801b7f2c0b52") : secret "webhook-server-cert" not found Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.124239 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r5shw" Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.162632 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-metrics-certs\") pod \"openstack-operator-controller-manager-7d46bf84bd-7lz8z\" (UID: \"e684de45-1d61-4324-8d52-801b7f2c0b52\") " pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z" Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.162795 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.162867 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-metrics-certs podName:e684de45-1d61-4324-8d52-801b7f2c0b52 nodeName:}" failed. No retries permitted until 2026-03-12 15:04:15.162851115 +0000 UTC m=+1013.806865341 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-metrics-certs") pod "openstack-operator-controller-manager-7d46bf84bd-7lz8z" (UID: "e684de45-1d61-4324-8d52-801b7f2c0b52") : secret "metrics-server-cert" not found Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.212832 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-ckrwx" event={"ID":"1361287f-20d2-4603-acad-c6b3a79040b2","Type":"ContainerStarted","Data":"0a87566489a219dcd3867ad2694b24a66a930674855a382e1b6ef964518a6cfd"} Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.226574 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-sxlwm"] Mar 12 15:04:14 crc kubenswrapper[4832]: W0312 15:04:14.231147 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fd6082b_ab5f_434e_9585_2bcc34c7cba9.slice/crio-dcd9af52570a681fc2689e7df1020ec7457668df277241a4b18998697f7b7ca2 WatchSource:0}: Error finding container dcd9af52570a681fc2689e7df1020ec7457668df277241a4b18998697f7b7ca2: Status 404 returned error can't find the container with id dcd9af52570a681fc2689e7df1020ec7457668df277241a4b18998697f7b7ca2 Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.254459 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r5shw" Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.566789 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/999db9dc-984c-40aa-be0f-1d98b78bf44f-cert\") pod \"infra-operator-controller-manager-5995f4446f-qk7gq\" (UID: \"999db9dc-984c-40aa-be0f-1d98b78bf44f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq" Mar 
12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.566986 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.567245 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/999db9dc-984c-40aa-be0f-1d98b78bf44f-cert podName:999db9dc-984c-40aa-be0f-1d98b78bf44f nodeName:}" failed. No retries permitted until 2026-03-12 15:04:16.567229147 +0000 UTC m=+1015.211243373 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/999db9dc-984c-40aa-be0f-1d98b78bf44f-cert") pod "infra-operator-controller-manager-5995f4446f-qk7gq" (UID: "999db9dc-984c-40aa-be0f-1d98b78bf44f") : secret "infra-operator-webhook-server-cert" not found Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.690640 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-6t9gx"] Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.696973 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-2sljf"] Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.702277 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-8ngt8"] Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.708401 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-lvnqm"] Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.712536 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5qrl6"] Mar 12 15:04:14 crc kubenswrapper[4832]: W0312 15:04:14.714431 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8760bef_6ca3_412a_b8bc_49de609fe9d3.slice/crio-202e6020178f2ef9cd4fae2b109239a25cea9df28f74f7a58699700115b91d09 WatchSource:0}: Error finding container 202e6020178f2ef9cd4fae2b109239a25cea9df28f74f7a58699700115b91d09: Status 404 returned error can't find the container with id 202e6020178f2ef9cd4fae2b109239a25cea9df28f74f7a58699700115b91d09 Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.719598 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-9sw66"] Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.726195 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-2vj9s"] Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.730072 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-rnjmc"] Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.737721 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-r6kt4"] Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.762388 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-qdn75"] Mar 12 15:04:14 crc kubenswrapper[4832]: W0312 15:04:14.770092 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3272780b_2460_41f3_bc98_7ff7708bda6f.slice/crio-ab0ad86570d9a9555fea16904b6b37e9a5501eb1052d6bbf4a9e0561d03f57ba WatchSource:0}: Error finding container ab0ad86570d9a9555fea16904b6b37e9a5501eb1052d6bbf4a9e0561d03f57ba: Status 404 returned error can't find the container with id ab0ad86570d9a9555fea16904b6b37e9a5501eb1052d6bbf4a9e0561d03f57ba Mar 12 15:04:14 crc kubenswrapper[4832]: W0312 15:04:14.773674 4832 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d58b640_14cf_4576_b441_448a87e34b04.slice/crio-b86731b928a02df75c05804960e549a3248ada9ccdfcffc38d031224b6f8401a WatchSource:0}: Error finding container b86731b928a02df75c05804960e549a3248ada9ccdfcffc38d031224b6f8401a: Status 404 returned error can't find the container with id b86731b928a02df75c05804960e549a3248ada9ccdfcffc38d031224b6f8401a Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.775657 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-8nlxs"] Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.784329 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-jr5hd"] Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.789444 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-c6wkm"] Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.794384 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hgppt"] Mar 12 15:04:14 crc kubenswrapper[4832]: W0312 15:04:14.797637 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b8d3e31_3f6c_4be0_b289_cd5afd6bb142.slice/crio-89cbd8981b1c82a9d99d00b9c6cfe1b9aafcd81a31c7943fb53551c4b2bac759 WatchSource:0}: Error finding container 89cbd8981b1c82a9d99d00b9c6cfe1b9aafcd81a31c7943fb53551c4b2bac759: Status 404 returned error can't find the container with id 89cbd8981b1c82a9d99d00b9c6cfe1b9aafcd81a31c7943fb53551c4b2bac759 Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.800340 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-djk4r"] Mar 
12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.804198 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-2z4xv"] Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.822702 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-4xd9n"] Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.825134 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c223309f51714785bd878ad04080f7428567edad793be4f992d492abd77af44c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z4q68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-677c674df7-c6wkm_openstack-operators(555d3165-c8b4-4bd9-bdc9-2e988734971b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.825157 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:27c84b712abc2df6108e22636075eec25fea0229800f38594a492fd41b02c49d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lfnq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6cd66dbd4b-5qrl6_openstack-operators(6b8d3e31-3f6c-4be0-b289-cd5afd6bb142): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.826297 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-677c674df7-c6wkm" podUID="555d3165-c8b4-4bd9-bdc9-2e988734971b" Mar 12 15:04:14 crc 
kubenswrapper[4832]: E0312 15:04:14.826371 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5qrl6" podUID="6b8d3e31-3f6c-4be0-b289-cd5afd6bb142" Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.827571 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjzmj"] Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.833984 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s4z92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5c5cb9c4d7-djk4r_openstack-operators(65719325-3b5a-4c67-add5-446fbadb2951): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.834006 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4af709a2a6a1a1abb9659dbdd6fb3818122bdec7e66009fcced0bf0949f91554,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m2rmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6dd88c6f67-hgppt_openstack-operators(a24c7823-20be-4bc5-82cf-fd57d664cb8f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.833998 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdrgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-569cc54c5-2z4xv_openstack-operators(78b5b9cf-6e4a-4ac8-8611-06b417453f45): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.834107 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4vxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bbc5b68f9-qdn75_openstack-operators(55d4a1b5-5971-426d-91dd-9a8f991552c0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.835273 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-qdn75" podUID="55d4a1b5-5971-426d-91dd-9a8f991552c0" Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.835320 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-djk4r" podUID="65719325-3b5a-4c67-add5-446fbadb2951" Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.835404 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-2z4xv" podUID="78b5b9cf-6e4a-4ac8-8611-06b417453f45" Mar 12 15:04:14 crc 
kubenswrapper[4832]: E0312 15:04:14.835423 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hgppt" podUID="a24c7823-20be-4bc5-82cf-fd57d664cb8f" Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.838336 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mzlw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-jjzmj_openstack-operators(ddc979ca-b73c-42b1-91a9-baf0f882ccf2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.838445 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dt5fr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4f55cb5c-4xd9n_openstack-operators(6c25d60c-d053-4b33-9ddd-8a95f18480f7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.840106 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-4xd9n" podUID="6c25d60c-d053-4b33-9ddd-8a95f18480f7" Mar 12 15:04:14 crc 
kubenswrapper[4832]: E0312 15:04:14.840176 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjzmj" podUID="ddc979ca-b73c-42b1-91a9-baf0f882ccf2" Mar 12 15:04:14 crc kubenswrapper[4832]: I0312 15:04:14.871097 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7b625q\" (UID: \"fbce7e5d-a791-4984-94c9-3bfdc12d70b9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q" Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.871311 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 15:04:14 crc kubenswrapper[4832]: E0312 15:04:14.871400 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-cert podName:fbce7e5d-a791-4984-94c9-3bfdc12d70b9 nodeName:}" failed. No retries permitted until 2026-03-12 15:04:16.871369323 +0000 UTC m=+1015.515383549 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7b625q" (UID: "fbce7e5d-a791-4984-94c9-3bfdc12d70b9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.075216 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-webhook-certs\") pod \"openstack-operator-controller-manager-7d46bf84bd-7lz8z\" (UID: \"e684de45-1d61-4324-8d52-801b7f2c0b52\") " pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z" Mar 12 15:04:15 crc kubenswrapper[4832]: E0312 15:04:15.075384 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 15:04:15 crc kubenswrapper[4832]: E0312 15:04:15.075448 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-webhook-certs podName:e684de45-1d61-4324-8d52-801b7f2c0b52 nodeName:}" failed. No retries permitted until 2026-03-12 15:04:17.075424688 +0000 UTC m=+1015.719438914 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-webhook-certs") pod "openstack-operator-controller-manager-7d46bf84bd-7lz8z" (UID: "e684de45-1d61-4324-8d52-801b7f2c0b52") : secret "webhook-server-cert" not found Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.176476 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-metrics-certs\") pod \"openstack-operator-controller-manager-7d46bf84bd-7lz8z\" (UID: \"e684de45-1d61-4324-8d52-801b7f2c0b52\") " pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z" Mar 12 15:04:15 crc kubenswrapper[4832]: E0312 15:04:15.176696 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 15:04:15 crc kubenswrapper[4832]: E0312 15:04:15.176799 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-metrics-certs podName:e684de45-1d61-4324-8d52-801b7f2c0b52 nodeName:}" failed. No retries permitted until 2026-03-12 15:04:17.176777976 +0000 UTC m=+1015.820792262 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-metrics-certs") pod "openstack-operator-controller-manager-7d46bf84bd-7lz8z" (UID: "e684de45-1d61-4324-8d52-801b7f2c0b52") : secret "metrics-server-cert" not found Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.231292 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-2sljf" event={"ID":"f8760bef-6ca3-412a-b8bc-49de609fe9d3","Type":"ContainerStarted","Data":"202e6020178f2ef9cd4fae2b109239a25cea9df28f74f7a58699700115b91d09"} Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.235453 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hgppt" event={"ID":"a24c7823-20be-4bc5-82cf-fd57d664cb8f","Type":"ContainerStarted","Data":"3a3b242d6058ad70019aef2fd95bce6a315773cc8c0719291742e3827ea406bf"} Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.239341 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-r6kt4" event={"ID":"2a4714d0-f39b-499a-88aa-e960dad0e00b","Type":"ContainerStarted","Data":"12363a700b3ac2594fd66bb81884e51fbbcef725595b854bc3a72708f1440a7c"} Mar 12 15:04:15 crc kubenswrapper[4832]: E0312 15:04:15.240289 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4af709a2a6a1a1abb9659dbdd6fb3818122bdec7e66009fcced0bf0949f91554\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hgppt" podUID="a24c7823-20be-4bc5-82cf-fd57d664cb8f" Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.240596 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-djk4r" event={"ID":"65719325-3b5a-4c67-add5-446fbadb2951","Type":"ContainerStarted","Data":"1bd87c078e09b7d5cf32fdd8b9b54814af0908713f2e989631f1b75214dc1336"} Mar 12 15:04:15 crc kubenswrapper[4832]: E0312 15:04:15.249891 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-djk4r" podUID="65719325-3b5a-4c67-add5-446fbadb2951" Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.250760 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-c6wkm" event={"ID":"555d3165-c8b4-4bd9-bdc9-2e988734971b","Type":"ContainerStarted","Data":"4d668e8becd801c964a61cd3d547786f90257d8fa5a7d366d1699da81a6d2285"} Mar 12 15:04:15 crc kubenswrapper[4832]: E0312 15:04:15.252554 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c223309f51714785bd878ad04080f7428567edad793be4f992d492abd77af44c\\\"\"" pod="openstack-operators/swift-operator-controller-manager-677c674df7-c6wkm" podUID="555d3165-c8b4-4bd9-bdc9-2e988734971b" Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.253341 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-4xd9n" event={"ID":"6c25d60c-d053-4b33-9ddd-8a95f18480f7","Type":"ContainerStarted","Data":"ebd2219b2c0ebfd1b3e0ec0599e4e35dfa79065b92a2651a90037534cb352704"} Mar 12 15:04:15 crc kubenswrapper[4832]: E0312 15:04:15.255630 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-4xd9n" podUID="6c25d60c-d053-4b33-9ddd-8a95f18480f7" Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.256483 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5qrl6" event={"ID":"6b8d3e31-3f6c-4be0-b289-cd5afd6bb142","Type":"ContainerStarted","Data":"89cbd8981b1c82a9d99d00b9c6cfe1b9aafcd81a31c7943fb53551c4b2bac759"} Mar 12 15:04:15 crc kubenswrapper[4832]: E0312 15:04:15.258013 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:27c84b712abc2df6108e22636075eec25fea0229800f38594a492fd41b02c49d\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5qrl6" podUID="6b8d3e31-3f6c-4be0-b289-cd5afd6bb142" Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.258751 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-6t9gx" event={"ID":"3d58b640-14cf-4576-b441-448a87e34b04","Type":"ContainerStarted","Data":"b86731b928a02df75c05804960e549a3248ada9ccdfcffc38d031224b6f8401a"} Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.262913 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-jr5hd" event={"ID":"e636b173-51b8-4325-a1cf-dcea5406cdee","Type":"ContainerStarted","Data":"446fda4b802b3af7a954dc549d6e65991c379ec1f8f860411589636876fe9645"} Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.264588 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-qdn75" event={"ID":"55d4a1b5-5971-426d-91dd-9a8f991552c0","Type":"ContainerStarted","Data":"7d1c7a1f280e8191fac5b7a57094e4411f71591a7f6583a2b0eee36aa2d023c3"} Mar 12 15:04:15 crc kubenswrapper[4832]: E0312 15:04:15.266174 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-qdn75" podUID="55d4a1b5-5971-426d-91dd-9a8f991552c0" Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.268071 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-8ngt8" event={"ID":"6fe61dcf-24b0-4c97-9639-15335615d4d4","Type":"ContainerStarted","Data":"355f08ad59db1ea480c99f6edd9cf09ab5033cbe18506447ef61e423912933c8"} Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.275793 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjzmj" event={"ID":"ddc979ca-b73c-42b1-91a9-baf0f882ccf2","Type":"ContainerStarted","Data":"c92d0257c7dc534a3a9a8a748c1688afe8cd8bf20328cc62e717efe55f5b8f12"} Mar 12 15:04:15 crc kubenswrapper[4832]: E0312 15:04:15.277239 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjzmj" podUID="ddc979ca-b73c-42b1-91a9-baf0f882ccf2" Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.277642 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-lvnqm" event={"ID":"5b7a63b2-3a6b-41fc-b7d0-f95e07bb760b","Type":"ContainerStarted","Data":"cd8ef2c00df4ed900b996b8b565a9ecee318437ffff09878d6b4fd70431815a9"} Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.288203 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-8nlxs" event={"ID":"572740f5-e207-4372-ab19-2b117aa31c69","Type":"ContainerStarted","Data":"ef16ea7b9059616f97e093bbef355d746c66a2e99c857b5f113bd34c6096fa0e"} Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.290196 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-2z4xv" event={"ID":"78b5b9cf-6e4a-4ac8-8611-06b417453f45","Type":"ContainerStarted","Data":"391fa66afb2a97837080e2656862f956c34d0ba82aa41c60950b12dc510b37d9"} Mar 12 15:04:15 crc kubenswrapper[4832]: E0312 15:04:15.293125 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922\\\"\"" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-2z4xv" podUID="78b5b9cf-6e4a-4ac8-8611-06b417453f45" Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.294924 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-9sw66" event={"ID":"3272780b-2460-41f3-bc98-7ff7708bda6f","Type":"ContainerStarted","Data":"ab0ad86570d9a9555fea16904b6b37e9a5501eb1052d6bbf4a9e0561d03f57ba"} Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.299352 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-sxlwm" 
event={"ID":"3fd6082b-ab5f-434e-9585-2bcc34c7cba9","Type":"ContainerStarted","Data":"dcd9af52570a681fc2689e7df1020ec7457668df277241a4b18998697f7b7ca2"} Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.306916 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rnjmc" event={"ID":"f258cf7f-c099-40f8-94be-4e0ec5252d88","Type":"ContainerStarted","Data":"48a9d6948775b9a828cc1edfaed1290c3e5d62e1dd969a74a4b435cb9b81708c"} Mar 12 15:04:15 crc kubenswrapper[4832]: I0312 15:04:15.308383 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-2vj9s" event={"ID":"6eb7bcdf-9bfa-4f7e-890b-9b5e7ea50f8f","Type":"ContainerStarted","Data":"922184bc9369ddd4be13d7c986ddd67277713718841af2f16c151fb7cc651f50"} Mar 12 15:04:16 crc kubenswrapper[4832]: E0312 15:04:16.319041 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjzmj" podUID="ddc979ca-b73c-42b1-91a9-baf0f882ccf2" Mar 12 15:04:16 crc kubenswrapper[4832]: E0312 15:04:16.321712 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4af709a2a6a1a1abb9659dbdd6fb3818122bdec7e66009fcced0bf0949f91554\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hgppt" podUID="a24c7823-20be-4bc5-82cf-fd57d664cb8f" Mar 12 15:04:16 crc kubenswrapper[4832]: E0312 15:04:16.321744 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-djk4r" podUID="65719325-3b5a-4c67-add5-446fbadb2951" Mar 12 15:04:16 crc kubenswrapper[4832]: E0312 15:04:16.321803 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-4xd9n" podUID="6c25d60c-d053-4b33-9ddd-8a95f18480f7" Mar 12 15:04:16 crc kubenswrapper[4832]: E0312 15:04:16.321838 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:27c84b712abc2df6108e22636075eec25fea0229800f38594a492fd41b02c49d\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5qrl6" podUID="6b8d3e31-3f6c-4be0-b289-cd5afd6bb142" Mar 12 15:04:16 crc kubenswrapper[4832]: E0312 15:04:16.322183 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-qdn75" podUID="55d4a1b5-5971-426d-91dd-9a8f991552c0" Mar 12 15:04:16 crc kubenswrapper[4832]: E0312 15:04:16.323256 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922\\\"\"" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-2z4xv" podUID="78b5b9cf-6e4a-4ac8-8611-06b417453f45" Mar 12 15:04:16 crc kubenswrapper[4832]: E0312 15:04:16.323403 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c223309f51714785bd878ad04080f7428567edad793be4f992d492abd77af44c\\\"\"" pod="openstack-operators/swift-operator-controller-manager-677c674df7-c6wkm" podUID="555d3165-c8b4-4bd9-bdc9-2e988734971b" Mar 12 15:04:16 crc kubenswrapper[4832]: I0312 15:04:16.517415 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r5shw"] Mar 12 15:04:16 crc kubenswrapper[4832]: I0312 15:04:16.517701 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r5shw" podUID="44167a86-7de7-4855-9e75-3f04b5e446fe" containerName="registry-server" containerID="cri-o://463b256eb4ba6b328542f9e84e5e4f94f00dcaa690ee6f4064eac452ff1f6486" gracePeriod=2 Mar 12 15:04:16 crc kubenswrapper[4832]: I0312 15:04:16.602696 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/999db9dc-984c-40aa-be0f-1d98b78bf44f-cert\") pod \"infra-operator-controller-manager-5995f4446f-qk7gq\" (UID: \"999db9dc-984c-40aa-be0f-1d98b78bf44f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq" Mar 12 15:04:16 crc kubenswrapper[4832]: E0312 15:04:16.602868 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 15:04:16 crc kubenswrapper[4832]: E0312 15:04:16.602913 4832 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/999db9dc-984c-40aa-be0f-1d98b78bf44f-cert podName:999db9dc-984c-40aa-be0f-1d98b78bf44f nodeName:}" failed. No retries permitted until 2026-03-12 15:04:20.602899547 +0000 UTC m=+1019.246913773 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/999db9dc-984c-40aa-be0f-1d98b78bf44f-cert") pod "infra-operator-controller-manager-5995f4446f-qk7gq" (UID: "999db9dc-984c-40aa-be0f-1d98b78bf44f") : secret "infra-operator-webhook-server-cert" not found Mar 12 15:04:16 crc kubenswrapper[4832]: I0312 15:04:16.909903 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7b625q\" (UID: \"fbce7e5d-a791-4984-94c9-3bfdc12d70b9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q" Mar 12 15:04:16 crc kubenswrapper[4832]: E0312 15:04:16.910365 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 15:04:16 crc kubenswrapper[4832]: E0312 15:04:16.910593 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-cert podName:fbce7e5d-a791-4984-94c9-3bfdc12d70b9 nodeName:}" failed. No retries permitted until 2026-03-12 15:04:20.910560395 +0000 UTC m=+1019.554574621 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7b625q" (UID: "fbce7e5d-a791-4984-94c9-3bfdc12d70b9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.099413 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r5shw" Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.120214 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-webhook-certs\") pod \"openstack-operator-controller-manager-7d46bf84bd-7lz8z\" (UID: \"e684de45-1d61-4324-8d52-801b7f2c0b52\") " pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z" Mar 12 15:04:17 crc kubenswrapper[4832]: E0312 15:04:17.120351 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 15:04:17 crc kubenswrapper[4832]: E0312 15:04:17.120412 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-webhook-certs podName:e684de45-1d61-4324-8d52-801b7f2c0b52 nodeName:}" failed. No retries permitted until 2026-03-12 15:04:21.120396778 +0000 UTC m=+1019.764411004 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-webhook-certs") pod "openstack-operator-controller-manager-7d46bf84bd-7lz8z" (UID: "e684de45-1d61-4324-8d52-801b7f2c0b52") : secret "webhook-server-cert" not found Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.222215 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44167a86-7de7-4855-9e75-3f04b5e446fe-utilities\") pod \"44167a86-7de7-4855-9e75-3f04b5e446fe\" (UID: \"44167a86-7de7-4855-9e75-3f04b5e446fe\") " Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.222354 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44167a86-7de7-4855-9e75-3f04b5e446fe-catalog-content\") pod \"44167a86-7de7-4855-9e75-3f04b5e446fe\" (UID: \"44167a86-7de7-4855-9e75-3f04b5e446fe\") " Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.222388 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7m7z\" (UniqueName: \"kubernetes.io/projected/44167a86-7de7-4855-9e75-3f04b5e446fe-kube-api-access-t7m7z\") pod \"44167a86-7de7-4855-9e75-3f04b5e446fe\" (UID: \"44167a86-7de7-4855-9e75-3f04b5e446fe\") " Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.222639 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-metrics-certs\") pod \"openstack-operator-controller-manager-7d46bf84bd-7lz8z\" (UID: \"e684de45-1d61-4324-8d52-801b7f2c0b52\") " pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z" Mar 12 15:04:17 crc kubenswrapper[4832]: E0312 15:04:17.222792 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 
15:04:17 crc kubenswrapper[4832]: E0312 15:04:17.222861 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-metrics-certs podName:e684de45-1d61-4324-8d52-801b7f2c0b52 nodeName:}" failed. No retries permitted until 2026-03-12 15:04:21.222840218 +0000 UTC m=+1019.866854444 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-metrics-certs") pod "openstack-operator-controller-manager-7d46bf84bd-7lz8z" (UID: "e684de45-1d61-4324-8d52-801b7f2c0b52") : secret "metrics-server-cert" not found Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.223265 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44167a86-7de7-4855-9e75-3f04b5e446fe-utilities" (OuterVolumeSpecName: "utilities") pod "44167a86-7de7-4855-9e75-3f04b5e446fe" (UID: "44167a86-7de7-4855-9e75-3f04b5e446fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.223604 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44167a86-7de7-4855-9e75-3f04b5e446fe-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.256641 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44167a86-7de7-4855-9e75-3f04b5e446fe-kube-api-access-t7m7z" (OuterVolumeSpecName: "kube-api-access-t7m7z") pod "44167a86-7de7-4855-9e75-3f04b5e446fe" (UID: "44167a86-7de7-4855-9e75-3f04b5e446fe"). InnerVolumeSpecName "kube-api-access-t7m7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.261583 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44167a86-7de7-4855-9e75-3f04b5e446fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44167a86-7de7-4855-9e75-3f04b5e446fe" (UID: "44167a86-7de7-4855-9e75-3f04b5e446fe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.324563 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44167a86-7de7-4855-9e75-3f04b5e446fe-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.324596 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7m7z\" (UniqueName: \"kubernetes.io/projected/44167a86-7de7-4855-9e75-3f04b5e446fe-kube-api-access-t7m7z\") on node \"crc\" DevicePath \"\"" Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.333568 4832 generic.go:334] "Generic (PLEG): container finished" podID="44167a86-7de7-4855-9e75-3f04b5e446fe" containerID="463b256eb4ba6b328542f9e84e5e4f94f00dcaa690ee6f4064eac452ff1f6486" exitCode=0 Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.333620 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5shw" event={"ID":"44167a86-7de7-4855-9e75-3f04b5e446fe","Type":"ContainerDied","Data":"463b256eb4ba6b328542f9e84e5e4f94f00dcaa690ee6f4064eac452ff1f6486"} Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.333651 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5shw" event={"ID":"44167a86-7de7-4855-9e75-3f04b5e446fe","Type":"ContainerDied","Data":"fe1d33e5e1357305e73c804e13cae6bc9e8bdd4cdffeff2cd303f4a1393bf7c0"} Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 
15:04:17.333653 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r5shw"
Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.333670 4832 scope.go:117] "RemoveContainer" containerID="463b256eb4ba6b328542f9e84e5e4f94f00dcaa690ee6f4064eac452ff1f6486"
Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.366310 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r5shw"]
Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.397553 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r5shw"]
Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.400773 4832 scope.go:117] "RemoveContainer" containerID="1e2dfb7b2466626b39cc8e82ecee6946286ae0c48539b19b86b594534dc2dbf6"
Mar 12 15:04:17 crc kubenswrapper[4832]: I0312 15:04:17.453386 4832 scope.go:117] "RemoveContainer" containerID="b412c7526e324737d0bc51bbb48700699eb8b40da1d694da8b31917bb0ea4015"
Mar 12 15:04:18 crc kubenswrapper[4832]: I0312 15:04:18.629169 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44167a86-7de7-4855-9e75-3f04b5e446fe" path="/var/lib/kubelet/pods/44167a86-7de7-4855-9e75-3f04b5e446fe/volumes"
Mar 12 15:04:20 crc kubenswrapper[4832]: I0312 15:04:20.688629 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/999db9dc-984c-40aa-be0f-1d98b78bf44f-cert\") pod \"infra-operator-controller-manager-5995f4446f-qk7gq\" (UID: \"999db9dc-984c-40aa-be0f-1d98b78bf44f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq"
Mar 12 15:04:20 crc kubenswrapper[4832]: E0312 15:04:20.688859 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 12 15:04:20 crc kubenswrapper[4832]: E0312 15:04:20.688989 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/999db9dc-984c-40aa-be0f-1d98b78bf44f-cert podName:999db9dc-984c-40aa-be0f-1d98b78bf44f nodeName:}" failed. No retries permitted until 2026-03-12 15:04:28.688943454 +0000 UTC m=+1027.332957680 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/999db9dc-984c-40aa-be0f-1d98b78bf44f-cert") pod "infra-operator-controller-manager-5995f4446f-qk7gq" (UID: "999db9dc-984c-40aa-be0f-1d98b78bf44f") : secret "infra-operator-webhook-server-cert" not found
Mar 12 15:04:20 crc kubenswrapper[4832]: I0312 15:04:20.993438 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7b625q\" (UID: \"fbce7e5d-a791-4984-94c9-3bfdc12d70b9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q"
Mar 12 15:04:20 crc kubenswrapper[4832]: E0312 15:04:20.993981 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 12 15:04:20 crc kubenswrapper[4832]: E0312 15:04:20.994039 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-cert podName:fbce7e5d-a791-4984-94c9-3bfdc12d70b9 nodeName:}" failed. No retries permitted until 2026-03-12 15:04:28.994020887 +0000 UTC m=+1027.638035113 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7b625q" (UID: "fbce7e5d-a791-4984-94c9-3bfdc12d70b9") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 12 15:04:21 crc kubenswrapper[4832]: I0312 15:04:21.196876 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-webhook-certs\") pod \"openstack-operator-controller-manager-7d46bf84bd-7lz8z\" (UID: \"e684de45-1d61-4324-8d52-801b7f2c0b52\") " pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z"
Mar 12 15:04:21 crc kubenswrapper[4832]: E0312 15:04:21.197052 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Mar 12 15:04:21 crc kubenswrapper[4832]: E0312 15:04:21.197118 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-webhook-certs podName:e684de45-1d61-4324-8d52-801b7f2c0b52 nodeName:}" failed. No retries permitted until 2026-03-12 15:04:29.197098784 +0000 UTC m=+1027.841113010 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-webhook-certs") pod "openstack-operator-controller-manager-7d46bf84bd-7lz8z" (UID: "e684de45-1d61-4324-8d52-801b7f2c0b52") : secret "webhook-server-cert" not found
Mar 12 15:04:21 crc kubenswrapper[4832]: I0312 15:04:21.298634 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-metrics-certs\") pod \"openstack-operator-controller-manager-7d46bf84bd-7lz8z\" (UID: \"e684de45-1d61-4324-8d52-801b7f2c0b52\") " pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z"
Mar 12 15:04:21 crc kubenswrapper[4832]: E0312 15:04:21.298794 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Mar 12 15:04:21 crc kubenswrapper[4832]: E0312 15:04:21.298865 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-metrics-certs podName:e684de45-1d61-4324-8d52-801b7f2c0b52 nodeName:}" failed. No retries permitted until 2026-03-12 15:04:29.298849734 +0000 UTC m=+1027.942863950 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-metrics-certs") pod "openstack-operator-controller-manager-7d46bf84bd-7lz8z" (UID: "e684de45-1d61-4324-8d52-801b7f2c0b52") : secret "metrics-server-cert" not found
Mar 12 15:04:26 crc kubenswrapper[4832]: I0312 15:04:26.198983 4832 scope.go:117] "RemoveContainer" containerID="463b256eb4ba6b328542f9e84e5e4f94f00dcaa690ee6f4064eac452ff1f6486"
Mar 12 15:04:26 crc kubenswrapper[4832]: E0312 15:04:26.199767 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"463b256eb4ba6b328542f9e84e5e4f94f00dcaa690ee6f4064eac452ff1f6486\": container with ID starting with 463b256eb4ba6b328542f9e84e5e4f94f00dcaa690ee6f4064eac452ff1f6486 not found: ID does not exist" containerID="463b256eb4ba6b328542f9e84e5e4f94f00dcaa690ee6f4064eac452ff1f6486"
Mar 12 15:04:26 crc kubenswrapper[4832]: I0312 15:04:26.199803 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"463b256eb4ba6b328542f9e84e5e4f94f00dcaa690ee6f4064eac452ff1f6486"} err="failed to get container status \"463b256eb4ba6b328542f9e84e5e4f94f00dcaa690ee6f4064eac452ff1f6486\": rpc error: code = NotFound desc = could not find container \"463b256eb4ba6b328542f9e84e5e4f94f00dcaa690ee6f4064eac452ff1f6486\": container with ID starting with 463b256eb4ba6b328542f9e84e5e4f94f00dcaa690ee6f4064eac452ff1f6486 not found: ID does not exist"
Mar 12 15:04:26 crc kubenswrapper[4832]: I0312 15:04:26.199827 4832 scope.go:117] "RemoveContainer" containerID="1e2dfb7b2466626b39cc8e82ecee6946286ae0c48539b19b86b594534dc2dbf6"
Mar 12 15:04:26 crc kubenswrapper[4832]: E0312 15:04:26.200294 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e2dfb7b2466626b39cc8e82ecee6946286ae0c48539b19b86b594534dc2dbf6\": container with ID starting with 1e2dfb7b2466626b39cc8e82ecee6946286ae0c48539b19b86b594534dc2dbf6 not found: ID does not exist" containerID="1e2dfb7b2466626b39cc8e82ecee6946286ae0c48539b19b86b594534dc2dbf6"
Mar 12 15:04:26 crc kubenswrapper[4832]: I0312 15:04:26.200319 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e2dfb7b2466626b39cc8e82ecee6946286ae0c48539b19b86b594534dc2dbf6"} err="failed to get container status \"1e2dfb7b2466626b39cc8e82ecee6946286ae0c48539b19b86b594534dc2dbf6\": rpc error: code = NotFound desc = could not find container \"1e2dfb7b2466626b39cc8e82ecee6946286ae0c48539b19b86b594534dc2dbf6\": container with ID starting with 1e2dfb7b2466626b39cc8e82ecee6946286ae0c48539b19b86b594534dc2dbf6 not found: ID does not exist"
Mar 12 15:04:26 crc kubenswrapper[4832]: I0312 15:04:26.200334 4832 scope.go:117] "RemoveContainer" containerID="b412c7526e324737d0bc51bbb48700699eb8b40da1d694da8b31917bb0ea4015"
Mar 12 15:04:26 crc kubenswrapper[4832]: E0312 15:04:26.201126 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b412c7526e324737d0bc51bbb48700699eb8b40da1d694da8b31917bb0ea4015\": container with ID starting with b412c7526e324737d0bc51bbb48700699eb8b40da1d694da8b31917bb0ea4015 not found: ID does not exist" containerID="b412c7526e324737d0bc51bbb48700699eb8b40da1d694da8b31917bb0ea4015"
Mar 12 15:04:26 crc kubenswrapper[4832]: I0312 15:04:26.201168 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b412c7526e324737d0bc51bbb48700699eb8b40da1d694da8b31917bb0ea4015"} err="failed to get container status \"b412c7526e324737d0bc51bbb48700699eb8b40da1d694da8b31917bb0ea4015\": rpc error: code = NotFound desc = could not find container \"b412c7526e324737d0bc51bbb48700699eb8b40da1d694da8b31917bb0ea4015\": container with ID starting with b412c7526e324737d0bc51bbb48700699eb8b40da1d694da8b31917bb0ea4015 not found: ID does not exist"
Mar 12 15:04:26 crc kubenswrapper[4832]: I0312 15:04:26.314568 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 15:04:26 crc kubenswrapper[4832]: I0312 15:04:26.314811 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.404316 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-r6kt4" event={"ID":"2a4714d0-f39b-499a-88aa-e960dad0e00b","Type":"ContainerStarted","Data":"10ee8c9927136e5dd74a7b90674383ee6a82f0397aead9805dce921e87812668"}
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.404793 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-r6kt4"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.406778 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-lvnqm" event={"ID":"5b7a63b2-3a6b-41fc-b7d0-f95e07bb760b","Type":"ContainerStarted","Data":"87eed151db6f5dc3f09eec20c8dc6e3458ea17866cb33bc05ce6eac1f95afeb1"}
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.407202 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-lvnqm"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.409044 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-8nlxs" event={"ID":"572740f5-e207-4372-ab19-2b117aa31c69","Type":"ContainerStarted","Data":"7e5bfac91f9ae0ff26bc27145b9a145e2f54730fc74c0a3a4d3633523d867cdf"}
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.409419 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-8nlxs"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.410694 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-sxlwm" event={"ID":"3fd6082b-ab5f-434e-9585-2bcc34c7cba9","Type":"ContainerStarted","Data":"308c3972ca824974c5d3022bfc8f863000caba4f9b63b0212563d10ccf4d4dc7"}
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.411023 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-sxlwm"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.412354 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-6t9gx" event={"ID":"3d58b640-14cf-4576-b441-448a87e34b04","Type":"ContainerStarted","Data":"94b54c6c858938dfa262e70686dc28608f167ee4d1576029fc07d91ef25a39f4"}
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.412747 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-6t9gx"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.414041 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-2sljf" event={"ID":"f8760bef-6ca3-412a-b8bc-49de609fe9d3","Type":"ContainerStarted","Data":"19a9ee93f8275d1010ababafea10a7780ca07f05d4dbe0d2bf0955a5cc9ed8f1"}
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.414384 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-2sljf"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.415898 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-jr5hd" event={"ID":"e636b173-51b8-4325-a1cf-dcea5406cdee","Type":"ContainerStarted","Data":"a984a63439ec6249bc42565a8c17e8667436b0b9c3273747bf51cf8acd9130e4"}
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.416222 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-jr5hd"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.417741 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-9sw66" event={"ID":"3272780b-2460-41f3-bc98-7ff7708bda6f","Type":"ContainerStarted","Data":"8d1bfaefd2559cbf83f427e110635f0dd65f97de1e7fbebf4ab58acce16623bc"}
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.418487 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-9sw66"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.419668 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-r6kt4" podStartSLOduration=3.447238547 podStartE2EDuration="15.419655334s" podCreationTimestamp="2026-03-12 15:04:12 +0000 UTC" firstStartedPulling="2026-03-12 15:04:14.776870424 +0000 UTC m=+1013.420884650" lastFinishedPulling="2026-03-12 15:04:26.749287211 +0000 UTC m=+1025.393301437" observedRunningTime="2026-03-12 15:04:27.418963294 +0000 UTC m=+1026.062977530" watchObservedRunningTime="2026-03-12 15:04:27.419655334 +0000 UTC m=+1026.063669560"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.420675 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-8ngt8" event={"ID":"6fe61dcf-24b0-4c97-9639-15335615d4d4","Type":"ContainerStarted","Data":"e4448b0fccd918ede856c827a363bb35c25af882516650389d42dcc1cbdf4594"}
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.421200 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-8ngt8"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.422968 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rnjmc" event={"ID":"f258cf7f-c099-40f8-94be-4e0ec5252d88","Type":"ContainerStarted","Data":"2d458485cd1d6297f05b8e40540e2eb4b6023ce0988642b9dbd8d8e546c801b0"}
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.423432 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rnjmc"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.425073 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-2vj9s" event={"ID":"6eb7bcdf-9bfa-4f7e-890b-9b5e7ea50f8f","Type":"ContainerStarted","Data":"f6f85b8ac85067eba046850c6ca552eb8f4f488bd9a30db776502ef8e61a8e1e"}
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.425420 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-2vj9s"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.427599 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-ckrwx" event={"ID":"1361287f-20d2-4603-acad-c6b3a79040b2","Type":"ContainerStarted","Data":"1291d5290847604e96d6663b6151ba3131bca404dafb3333c25c14c275a98219"}
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.428029 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-ckrwx"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.491251 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-6t9gx" podStartSLOduration=3.5583237370000003 podStartE2EDuration="15.491229888s" podCreationTimestamp="2026-03-12 15:04:12 +0000 UTC" firstStartedPulling="2026-03-12 15:04:14.776884115 +0000 UTC m=+1013.420898341" lastFinishedPulling="2026-03-12 15:04:26.709790266 +0000 UTC m=+1025.353804492" observedRunningTime="2026-03-12 15:04:27.48714763 +0000 UTC m=+1026.131161846" watchObservedRunningTime="2026-03-12 15:04:27.491229888 +0000 UTC m=+1026.135244114"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.537121 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-sxlwm" podStartSLOduration=3.082408961 podStartE2EDuration="15.537106668s" podCreationTimestamp="2026-03-12 15:04:12 +0000 UTC" firstStartedPulling="2026-03-12 15:04:14.233840823 +0000 UTC m=+1012.877855049" lastFinishedPulling="2026-03-12 15:04:26.68853853 +0000 UTC m=+1025.332552756" observedRunningTime="2026-03-12 15:04:27.537000505 +0000 UTC m=+1026.181014771" watchObservedRunningTime="2026-03-12 15:04:27.537106668 +0000 UTC m=+1026.181120894"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.594795 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-8nlxs" podStartSLOduration=3.552765146 podStartE2EDuration="15.59477417s" podCreationTimestamp="2026-03-12 15:04:12 +0000 UTC" firstStartedPulling="2026-03-12 15:04:14.794983249 +0000 UTC m=+1013.438997475" lastFinishedPulling="2026-03-12 15:04:26.836992263 +0000 UTC m=+1025.481006499" observedRunningTime="2026-03-12 15:04:27.594394869 +0000 UTC m=+1026.238409095" watchObservedRunningTime="2026-03-12 15:04:27.59477417 +0000 UTC m=+1026.238788396"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.656929 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-jr5hd" podStartSLOduration=3.765048599 podStartE2EDuration="15.656910701s" podCreationTimestamp="2026-03-12 15:04:12 +0000 UTC" firstStartedPulling="2026-03-12 15:04:14.797326667 +0000 UTC m=+1013.441340893" lastFinishedPulling="2026-03-12 15:04:26.689188769 +0000 UTC m=+1025.333202995" observedRunningTime="2026-03-12 15:04:27.627815908 +0000 UTC m=+1026.271830134" watchObservedRunningTime="2026-03-12 15:04:27.656910701 +0000 UTC m=+1026.300924917"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.665400 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-2sljf" podStartSLOduration=3.681246241 podStartE2EDuration="15.665385077s" podCreationTimestamp="2026-03-12 15:04:12 +0000 UTC" firstStartedPulling="2026-03-12 15:04:14.72565377 +0000 UTC m=+1013.369667996" lastFinishedPulling="2026-03-12 15:04:26.709792606 +0000 UTC m=+1025.353806832" observedRunningTime="2026-03-12 15:04:27.662661418 +0000 UTC m=+1026.306675644" watchObservedRunningTime="2026-03-12 15:04:27.665385077 +0000 UTC m=+1026.309399303"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.704362 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-lvnqm" podStartSLOduration=3.76610638 podStartE2EDuration="15.704345826s" podCreationTimestamp="2026-03-12 15:04:12 +0000 UTC" firstStartedPulling="2026-03-12 15:04:14.760610753 +0000 UTC m=+1013.404624979" lastFinishedPulling="2026-03-12 15:04:26.698850199 +0000 UTC m=+1025.342864425" observedRunningTime="2026-03-12 15:04:27.697644182 +0000 UTC m=+1026.341658408" watchObservedRunningTime="2026-03-12 15:04:27.704345826 +0000 UTC m=+1026.348360052"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.735889 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-9sw66" podStartSLOduration=3.745599366 podStartE2EDuration="15.73586976s" podCreationTimestamp="2026-03-12 15:04:12 +0000 UTC" firstStartedPulling="2026-03-12 15:04:14.773148946 +0000 UTC m=+1013.417163172" lastFinishedPulling="2026-03-12 15:04:26.76341934 +0000 UTC m=+1025.407433566" observedRunningTime="2026-03-12 15:04:27.732399589 +0000 UTC m=+1026.376413815" watchObservedRunningTime="2026-03-12 15:04:27.73586976 +0000 UTC m=+1026.379883986"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.775367 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-2vj9s" podStartSLOduration=3.8278798910000003 podStartE2EDuration="15.775354125s" podCreationTimestamp="2026-03-12 15:04:12 +0000 UTC" firstStartedPulling="2026-03-12 15:04:14.767007278 +0000 UTC m=+1013.411021504" lastFinishedPulling="2026-03-12 15:04:26.714481512 +0000 UTC m=+1025.358495738" observedRunningTime="2026-03-12 15:04:27.773987115 +0000 UTC m=+1026.418001341" watchObservedRunningTime="2026-03-12 15:04:27.775354125 +0000 UTC m=+1026.419368351"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.776546 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-8ngt8" podStartSLOduration=3.829263021 podStartE2EDuration="15.776541749s" podCreationTimestamp="2026-03-12 15:04:12 +0000 UTC" firstStartedPulling="2026-03-12 15:04:14.741965712 +0000 UTC m=+1013.385979938" lastFinishedPulling="2026-03-12 15:04:26.68924442 +0000 UTC m=+1025.333258666" observedRunningTime="2026-03-12 15:04:27.755676044 +0000 UTC m=+1026.399690270" watchObservedRunningTime="2026-03-12 15:04:27.776541749 +0000 UTC m=+1026.420555985"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.801092 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-ckrwx" podStartSLOduration=2.998958412 podStartE2EDuration="15.80107609s" podCreationTimestamp="2026-03-12 15:04:12 +0000 UTC" firstStartedPulling="2026-03-12 15:04:13.908258715 +0000 UTC m=+1012.552272941" lastFinishedPulling="2026-03-12 15:04:26.710376393 +0000 UTC m=+1025.354390619" observedRunningTime="2026-03-12 15:04:27.796395595 +0000 UTC m=+1026.440409821" watchObservedRunningTime="2026-03-12 15:04:27.80107609 +0000 UTC m=+1026.445090316"
Mar 12 15:04:27 crc kubenswrapper[4832]: I0312 15:04:27.827907 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rnjmc" podStartSLOduration=3.861050672 podStartE2EDuration="15.827885557s" podCreationTimestamp="2026-03-12 15:04:12 +0000 UTC" firstStartedPulling="2026-03-12 15:04:14.74292954 +0000 UTC m=+1013.386943766" lastFinishedPulling="2026-03-12 15:04:26.709764425 +0000 UTC m=+1025.353778651" observedRunningTime="2026-03-12 15:04:27.82727999 +0000 UTC m=+1026.471294216" watchObservedRunningTime="2026-03-12 15:04:27.827885557 +0000 UTC m=+1026.471899783"
Mar 12 15:04:28 crc kubenswrapper[4832]: I0312 15:04:28.728254 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/999db9dc-984c-40aa-be0f-1d98b78bf44f-cert\") pod \"infra-operator-controller-manager-5995f4446f-qk7gq\" (UID: \"999db9dc-984c-40aa-be0f-1d98b78bf44f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq"
Mar 12 15:04:28 crc kubenswrapper[4832]: I0312 15:04:28.736421 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/999db9dc-984c-40aa-be0f-1d98b78bf44f-cert\") pod \"infra-operator-controller-manager-5995f4446f-qk7gq\" (UID: \"999db9dc-984c-40aa-be0f-1d98b78bf44f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq"
Mar 12 15:04:29 crc kubenswrapper[4832]: I0312 15:04:29.017383 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-8qdtc"
Mar 12 15:04:29 crc kubenswrapper[4832]: I0312 15:04:29.027090 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq"
Mar 12 15:04:29 crc kubenswrapper[4832]: I0312 15:04:29.032558 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7b625q\" (UID: \"fbce7e5d-a791-4984-94c9-3bfdc12d70b9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q"
Mar 12 15:04:29 crc kubenswrapper[4832]: I0312 15:04:29.052362 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fbce7e5d-a791-4984-94c9-3bfdc12d70b9-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7b625q\" (UID: \"fbce7e5d-a791-4984-94c9-3bfdc12d70b9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q"
Mar 12 15:04:29 crc kubenswrapper[4832]: I0312 15:04:29.106439 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-fzdwp"
Mar 12 15:04:29 crc kubenswrapper[4832]: I0312 15:04:29.115663 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q"
Mar 12 15:04:29 crc kubenswrapper[4832]: I0312 15:04:29.241191 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-webhook-certs\") pod \"openstack-operator-controller-manager-7d46bf84bd-7lz8z\" (UID: \"e684de45-1d61-4324-8d52-801b7f2c0b52\") " pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z"
Mar 12 15:04:29 crc kubenswrapper[4832]: I0312 15:04:29.259362 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-webhook-certs\") pod \"openstack-operator-controller-manager-7d46bf84bd-7lz8z\" (UID: \"e684de45-1d61-4324-8d52-801b7f2c0b52\") " pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z"
Mar 12 15:04:29 crc kubenswrapper[4832]: I0312 15:04:29.343470 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-metrics-certs\") pod \"openstack-operator-controller-manager-7d46bf84bd-7lz8z\" (UID: \"e684de45-1d61-4324-8d52-801b7f2c0b52\") " pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z"
Mar 12 15:04:29 crc kubenswrapper[4832]: I0312 15:04:29.347155 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e684de45-1d61-4324-8d52-801b7f2c0b52-metrics-certs\") pod \"openstack-operator-controller-manager-7d46bf84bd-7lz8z\" (UID: \"e684de45-1d61-4324-8d52-801b7f2c0b52\") " pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z"
Mar 12 15:04:29 crc kubenswrapper[4832]: I0312 15:04:29.404869 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-c46tn"
Mar 12 15:04:29 crc kubenswrapper[4832]: I0312 15:04:29.413608 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z"
Mar 12 15:04:29 crc kubenswrapper[4832]: I0312 15:04:29.581447 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq"]
Mar 12 15:04:29 crc kubenswrapper[4832]: I0312 15:04:29.621444 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z"]
Mar 12 15:04:29 crc kubenswrapper[4832]: W0312 15:04:29.637575 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode684de45_1d61_4324_8d52_801b7f2c0b52.slice/crio-7db5af29e4462ae1250cebc1bb869b5383e2c6cbb5959f5a2f42d48529a2e44f WatchSource:0}: Error finding container 7db5af29e4462ae1250cebc1bb869b5383e2c6cbb5959f5a2f42d48529a2e44f: Status 404 returned error can't find the container with id 7db5af29e4462ae1250cebc1bb869b5383e2c6cbb5959f5a2f42d48529a2e44f
Mar 12 15:04:29 crc kubenswrapper[4832]: I0312 15:04:29.677811 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q"]
Mar 12 15:04:30 crc kubenswrapper[4832]: W0312 15:04:30.355347 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbce7e5d_a791_4984_94c9_3bfdc12d70b9.slice/crio-73df7f2a02f48c499161471d04269cacce0dbac2ea0469fb0c10851fa315b2ff WatchSource:0}: Error finding container 73df7f2a02f48c499161471d04269cacce0dbac2ea0469fb0c10851fa315b2ff: Status 404 returned error can't find the container with id 73df7f2a02f48c499161471d04269cacce0dbac2ea0469fb0c10851fa315b2ff
Mar 12 15:04:30 crc kubenswrapper[4832]: I0312 15:04:30.451423 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q" event={"ID":"fbce7e5d-a791-4984-94c9-3bfdc12d70b9","Type":"ContainerStarted","Data":"73df7f2a02f48c499161471d04269cacce0dbac2ea0469fb0c10851fa315b2ff"}
Mar 12 15:04:30 crc kubenswrapper[4832]: I0312 15:04:30.452564 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z" event={"ID":"e684de45-1d61-4324-8d52-801b7f2c0b52","Type":"ContainerStarted","Data":"7db5af29e4462ae1250cebc1bb869b5383e2c6cbb5959f5a2f42d48529a2e44f"}
Mar 12 15:04:30 crc kubenswrapper[4832]: I0312 15:04:30.454263 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq" event={"ID":"999db9dc-984c-40aa-be0f-1d98b78bf44f","Type":"ContainerStarted","Data":"b13ef01bc60f5466cb8b5989858d9cf152d3c8aacebf0aa1393c48712e2f0b01"}
Mar 12 15:04:31 crc kubenswrapper[4832]: I0312 15:04:31.472098 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hgppt" event={"ID":"a24c7823-20be-4bc5-82cf-fd57d664cb8f","Type":"ContainerStarted","Data":"ebaebfea6dd9a7d801378028de52ab32479375a2fbc0d56e864ea56a8517ac52"}
Mar 12 15:04:31 crc kubenswrapper[4832]: I0312 15:04:31.472641 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hgppt"
Mar 12 15:04:31 crc kubenswrapper[4832]: I0312 15:04:31.492394 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-qdn75" event={"ID":"55d4a1b5-5971-426d-91dd-9a8f991552c0","Type":"ContainerStarted","Data":"366c9803b8b57006b9aabe2904afe7c329e0bc3c547988397a2ae4aee8afcf68"}
Mar 12 15:04:31 crc kubenswrapper[4832]: I0312 15:04:31.492999 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-qdn75"
Mar 12 15:04:31 crc kubenswrapper[4832]: I0312 15:04:31.504335 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z" event={"ID":"e684de45-1d61-4324-8d52-801b7f2c0b52","Type":"ContainerStarted","Data":"8809e922d26e98a00634bb49ea9d910aeb2bb6941f3f3a117ae056f6542c4801"}
Mar 12 15:04:31 crc kubenswrapper[4832]: I0312 15:04:31.504941 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z"
Mar 12 15:04:31 crc kubenswrapper[4832]: I0312 15:04:31.507040 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hgppt" podStartSLOduration=2.3918138 podStartE2EDuration="18.507023188s" podCreationTimestamp="2026-03-12 15:04:13 +0000 UTC" firstStartedPulling="2026-03-12 15:04:14.833879657 +0000 UTC m=+1013.477893883" lastFinishedPulling="2026-03-12 15:04:30.949089045 +0000 UTC m=+1029.593103271" observedRunningTime="2026-03-12 15:04:31.498423109 +0000 UTC m=+1030.142437335" watchObservedRunningTime="2026-03-12 15:04:31.507023188 +0000 UTC m=+1030.151037414"
Mar 12 15:04:31 crc kubenswrapper[4832]: I0312 15:04:31.531078 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-qdn75" podStartSLOduration=3.415805686 podStartE2EDuration="19.531015044s" podCreationTimestamp="2026-03-12 15:04:12 +0000 UTC" firstStartedPulling="2026-03-12 15:04:14.834045612 +0000 UTC m=+1013.478059838" lastFinishedPulling="2026-03-12 15:04:30.94925497 +0000 UTC m=+1029.593269196" observedRunningTime="2026-03-12 15:04:31.526995567 +0000 UTC m=+1030.171009793" watchObservedRunningTime="2026-03-12 15:04:31.531015044 +0000 UTC m=+1030.175029270"
Mar 12 15:04:31 crc kubenswrapper[4832]: I0312 15:04:31.577380 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z" podStartSLOduration=18.577365927 podStartE2EDuration="18.577365927s" podCreationTimestamp="2026-03-12 15:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:04:31.576953785 +0000 UTC m=+1030.220968011" watchObservedRunningTime="2026-03-12 15:04:31.577365927 +0000 UTC m=+1030.221380143"
Mar 12 15:04:32 crc kubenswrapper[4832]: I0312 15:04:32.879735 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-ckrwx"
Mar 12 15:04:32 crc kubenswrapper[4832]: I0312 15:04:32.936972 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rnjmc"
Mar 12 15:04:32 crc kubenswrapper[4832]: I0312 15:04:32.957320 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-sxlwm"
Mar 12 15:04:33 crc kubenswrapper[4832]: I0312 15:04:33.026276 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-2vj9s"
Mar 12 15:04:33 crc kubenswrapper[4832]: I0312 15:04:33.068540 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-2sljf"
Mar 12 15:04:33 crc kubenswrapper[4832]: I0312 15:04:33.105550 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-r6kt4"
Mar 12 15:04:33 crc kubenswrapper[4832]: I0312 15:04:33.141801 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-jr5hd"
Mar 12 15:04:33 crc kubenswrapper[4832]: I0312 15:04:33.147171 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-8nlxs"
Mar 12 15:04:33 crc kubenswrapper[4832]: I0312 15:04:33.222298 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-lvnqm"
Mar 12 15:04:33 crc kubenswrapper[4832]: I0312 15:04:33.317192 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-9sw66"
Mar 12 15:04:33 crc kubenswrapper[4832]: I0312 15:04:33.324000 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-8ngt8"
Mar 12 15:04:33 crc kubenswrapper[4832]: I0312 15:04:33.554267 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-6t9gx"
Mar 12 15:04:39 crc kubenswrapper[4832]: I0312 15:04:39.426367 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7d46bf84bd-7lz8z"
Mar 12 15:04:41 crc kubenswrapper[4832]: I0312 15:04:41.509066 4832 scope.go:117] "RemoveContainer" containerID="93932f73c89129855c3402f73985d66772d09b93854877301c0e466bbc9ef912"
Mar 12 15:04:43 crc kubenswrapper[4832]: I0312 15:04:43.501125 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready"
pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-qdn75" Mar 12 15:04:43 crc kubenswrapper[4832]: I0312 15:04:43.790756 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hgppt" Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.611738 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-2z4xv" event={"ID":"78b5b9cf-6e4a-4ac8-8611-06b417453f45","Type":"ContainerStarted","Data":"751c5a3a8c13d1efd31c8e584260776eac75e31af98e43369ad22dc92d10c360"} Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.612208 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-2z4xv" Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.613203 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-c6wkm" event={"ID":"555d3165-c8b4-4bd9-bdc9-2e988734971b","Type":"ContainerStarted","Data":"05e0fc290ee7ec76ffeeb355b1a98c799f78ebcf0f2efa451afd09d509af0065"} Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.613416 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-677c674df7-c6wkm" Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.614882 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-4xd9n" event={"ID":"6c25d60c-d053-4b33-9ddd-8a95f18480f7","Type":"ContainerStarted","Data":"3a5fd774768ff4b80983057907879e184d05d71de364a0d80100be8c5d19a55e"} Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.615008 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-4xd9n" Mar 12 15:04:44 crc kubenswrapper[4832]: 
I0312 15:04:44.615979 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q" event={"ID":"fbce7e5d-a791-4984-94c9-3bfdc12d70b9","Type":"ContainerStarted","Data":"1c52a512211c1842741ecf545e741b6418cd65b56b7cd90323f2a1f4c9df9327"} Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.616097 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q" Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.617602 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5qrl6" event={"ID":"6b8d3e31-3f6c-4be0-b289-cd5afd6bb142","Type":"ContainerStarted","Data":"cc4a43fe88c2aee60a2984ae742ba5e50d6d4631d058da2891ee4ca42ee0aff5"} Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.617782 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5qrl6" Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.633243 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjzmj" event={"ID":"ddc979ca-b73c-42b1-91a9-baf0f882ccf2","Type":"ContainerStarted","Data":"efff6d085ffb9a6a353c67b2c5cd24d72f56d0d67103bd2723b314df19b411d2"} Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.633286 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq" event={"ID":"999db9dc-984c-40aa-be0f-1d98b78bf44f","Type":"ContainerStarted","Data":"93a9588fb489025a1dbae808998bd58ffd360fd8d6cdddbc1646db3adee029dd"} Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.633312 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq" Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.633325 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-djk4r" event={"ID":"65719325-3b5a-4c67-add5-446fbadb2951","Type":"ContainerStarted","Data":"2135b2593b0bcd84a5c7a509e5f021d0fbe0cf5498b51536bcb1c13e0f4936ab"} Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.633961 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-djk4r" Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.645106 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-2z4xv" podStartSLOduration=4.255132295 podStartE2EDuration="32.645086704s" podCreationTimestamp="2026-03-12 15:04:12 +0000 UTC" firstStartedPulling="2026-03-12 15:04:14.833880647 +0000 UTC m=+1013.477894883" lastFinishedPulling="2026-03-12 15:04:43.223835066 +0000 UTC m=+1041.867849292" observedRunningTime="2026-03-12 15:04:44.644710213 +0000 UTC m=+1043.288724439" watchObservedRunningTime="2026-03-12 15:04:44.645086704 +0000 UTC m=+1043.289100930" Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.660702 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-4xd9n" podStartSLOduration=4.210227214 podStartE2EDuration="32.660685286s" podCreationTimestamp="2026-03-12 15:04:12 +0000 UTC" firstStartedPulling="2026-03-12 15:04:14.838387768 +0000 UTC m=+1013.482401994" lastFinishedPulling="2026-03-12 15:04:43.28884581 +0000 UTC m=+1041.932860066" observedRunningTime="2026-03-12 15:04:44.658856653 +0000 UTC m=+1043.302870899" watchObservedRunningTime="2026-03-12 15:04:44.660685286 +0000 UTC m=+1043.304699512" Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 
15:04:44.676344 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjzmj" podStartSLOduration=3.206451895 podStartE2EDuration="31.67632163s" podCreationTimestamp="2026-03-12 15:04:13 +0000 UTC" firstStartedPulling="2026-03-12 15:04:14.838218563 +0000 UTC m=+1013.482232789" lastFinishedPulling="2026-03-12 15:04:43.308088288 +0000 UTC m=+1041.952102524" observedRunningTime="2026-03-12 15:04:44.669400639 +0000 UTC m=+1043.313414865" watchObservedRunningTime="2026-03-12 15:04:44.67632163 +0000 UTC m=+1043.320335876" Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.706798 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q" podStartSLOduration=19.840315199 podStartE2EDuration="32.706778263s" podCreationTimestamp="2026-03-12 15:04:12 +0000 UTC" firstStartedPulling="2026-03-12 15:04:30.357541837 +0000 UTC m=+1029.001556063" lastFinishedPulling="2026-03-12 15:04:43.224004901 +0000 UTC m=+1041.868019127" observedRunningTime="2026-03-12 15:04:44.69116563 +0000 UTC m=+1043.335179856" watchObservedRunningTime="2026-03-12 15:04:44.706778263 +0000 UTC m=+1043.350792489" Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.709144 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-677c674df7-c6wkm" podStartSLOduration=3.28045697 podStartE2EDuration="31.709138301s" podCreationTimestamp="2026-03-12 15:04:13 +0000 UTC" firstStartedPulling="2026-03-12 15:04:14.82500894 +0000 UTC m=+1013.469023166" lastFinishedPulling="2026-03-12 15:04:43.253690281 +0000 UTC m=+1041.897704497" observedRunningTime="2026-03-12 15:04:44.703714674 +0000 UTC m=+1043.347728900" watchObservedRunningTime="2026-03-12 15:04:44.709138301 +0000 UTC m=+1043.353152527" Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 
15:04:44.721082 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5qrl6" podStartSLOduration=3.292483738 podStartE2EDuration="31.721063877s" podCreationTimestamp="2026-03-12 15:04:13 +0000 UTC" firstStartedPulling="2026-03-12 15:04:14.82502549 +0000 UTC m=+1013.469039716" lastFinishedPulling="2026-03-12 15:04:43.253605629 +0000 UTC m=+1041.897619855" observedRunningTime="2026-03-12 15:04:44.715797504 +0000 UTC m=+1043.359811750" watchObservedRunningTime="2026-03-12 15:04:44.721063877 +0000 UTC m=+1043.365078103" Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.760821 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq" podStartSLOduration=19.099101773 podStartE2EDuration="32.760798919s" podCreationTimestamp="2026-03-12 15:04:12 +0000 UTC" firstStartedPulling="2026-03-12 15:04:29.591791189 +0000 UTC m=+1028.235805415" lastFinishedPulling="2026-03-12 15:04:43.253488315 +0000 UTC m=+1041.897502561" observedRunningTime="2026-03-12 15:04:44.752339943 +0000 UTC m=+1043.396354169" watchObservedRunningTime="2026-03-12 15:04:44.760798919 +0000 UTC m=+1043.404813145" Mar 12 15:04:44 crc kubenswrapper[4832]: I0312 15:04:44.774372 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-djk4r" podStartSLOduration=3.354755003 podStartE2EDuration="31.774354621s" podCreationTimestamp="2026-03-12 15:04:13 +0000 UTC" firstStartedPulling="2026-03-12 15:04:14.833837186 +0000 UTC m=+1013.477851412" lastFinishedPulling="2026-03-12 15:04:43.253436804 +0000 UTC m=+1041.897451030" observedRunningTime="2026-03-12 15:04:44.770695135 +0000 UTC m=+1043.414709361" watchObservedRunningTime="2026-03-12 15:04:44.774354621 +0000 UTC m=+1043.418368847" Mar 12 15:04:49 crc kubenswrapper[4832]: I0312 15:04:49.034235 
4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-qk7gq" Mar 12 15:04:49 crc kubenswrapper[4832]: I0312 15:04:49.121263 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7b625q" Mar 12 15:04:53 crc kubenswrapper[4832]: I0312 15:04:53.581746 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5qrl6" Mar 12 15:04:53 crc kubenswrapper[4832]: I0312 15:04:53.601670 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-677c674df7-c6wkm" Mar 12 15:04:53 crc kubenswrapper[4832]: I0312 15:04:53.661348 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-2z4xv" Mar 12 15:04:53 crc kubenswrapper[4832]: I0312 15:04:53.695166 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-4xd9n" Mar 12 15:04:53 crc kubenswrapper[4832]: I0312 15:04:53.752012 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-djk4r" Mar 12 15:04:56 crc kubenswrapper[4832]: I0312 15:04:56.314322 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:04:56 crc kubenswrapper[4832]: I0312 15:04:56.314795 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" 
podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:04:56 crc kubenswrapper[4832]: I0312 15:04:56.314862 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 15:04:56 crc kubenswrapper[4832]: I0312 15:04:56.315734 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"06828ac09576ddc400ed9fc2eeb342e216438273fb168f6b943b24fb1b40966f"} pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 15:04:56 crc kubenswrapper[4832]: I0312 15:04:56.315818 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" containerID="cri-o://06828ac09576ddc400ed9fc2eeb342e216438273fb168f6b943b24fb1b40966f" gracePeriod=600 Mar 12 15:04:56 crc kubenswrapper[4832]: I0312 15:04:56.748529 4832 generic.go:334] "Generic (PLEG): container finished" podID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerID="06828ac09576ddc400ed9fc2eeb342e216438273fb168f6b943b24fb1b40966f" exitCode=0 Mar 12 15:04:56 crc kubenswrapper[4832]: I0312 15:04:56.748651 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerDied","Data":"06828ac09576ddc400ed9fc2eeb342e216438273fb168f6b943b24fb1b40966f"} Mar 12 15:04:56 crc kubenswrapper[4832]: I0312 15:04:56.748882 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerStarted","Data":"b5ef664f80b54949b800723b7418c7329f135481aab0e1a581de0cbcca235b5b"} Mar 12 15:04:56 crc kubenswrapper[4832]: I0312 15:04:56.748909 4832 scope.go:117] "RemoveContainer" containerID="cb2ae425dc6888cba35a19e88d8e8d25fe43467e7d813212653d18ffcd00311f" Mar 12 15:05:10 crc kubenswrapper[4832]: I0312 15:05:10.983761 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cr8l7"] Mar 12 15:05:10 crc kubenswrapper[4832]: E0312 15:05:10.984814 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44167a86-7de7-4855-9e75-3f04b5e446fe" containerName="registry-server" Mar 12 15:05:10 crc kubenswrapper[4832]: I0312 15:05:10.984895 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="44167a86-7de7-4855-9e75-3f04b5e446fe" containerName="registry-server" Mar 12 15:05:10 crc kubenswrapper[4832]: E0312 15:05:10.984907 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44167a86-7de7-4855-9e75-3f04b5e446fe" containerName="extract-utilities" Mar 12 15:05:10 crc kubenswrapper[4832]: I0312 15:05:10.984913 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="44167a86-7de7-4855-9e75-3f04b5e446fe" containerName="extract-utilities" Mar 12 15:05:10 crc kubenswrapper[4832]: E0312 15:05:10.984929 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44167a86-7de7-4855-9e75-3f04b5e446fe" containerName="extract-content" Mar 12 15:05:10 crc kubenswrapper[4832]: I0312 15:05:10.984936 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="44167a86-7de7-4855-9e75-3f04b5e446fe" containerName="extract-content" Mar 12 15:05:10 crc kubenswrapper[4832]: I0312 15:05:10.988345 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="44167a86-7de7-4855-9e75-3f04b5e446fe" containerName="registry-server" Mar 12 15:05:10 crc kubenswrapper[4832]: 
I0312 15:05:10.989044 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-cr8l7" Mar 12 15:05:10 crc kubenswrapper[4832]: I0312 15:05:10.991630 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 12 15:05:10 crc kubenswrapper[4832]: I0312 15:05:10.991666 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 12 15:05:10 crc kubenswrapper[4832]: I0312 15:05:10.991782 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 12 15:05:10 crc kubenswrapper[4832]: I0312 15:05:10.991957 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-nm2sr" Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.011573 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cr8l7"] Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.044986 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5x424"] Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.047368 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5x424" Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.049782 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.061186 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5x424"] Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.154788 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8sp9\" (UniqueName: \"kubernetes.io/projected/43d90eb2-2118-41b1-9394-a593a3c22cad-kube-api-access-w8sp9\") pod \"dnsmasq-dns-675f4bcbfc-cr8l7\" (UID: \"43d90eb2-2118-41b1-9394-a593a3c22cad\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cr8l7" Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.154868 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43d90eb2-2118-41b1-9394-a593a3c22cad-config\") pod \"dnsmasq-dns-675f4bcbfc-cr8l7\" (UID: \"43d90eb2-2118-41b1-9394-a593a3c22cad\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cr8l7" Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.154897 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaf439d8-7929-455f-98bd-0114eecaaba5-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5x424\" (UID: \"aaf439d8-7929-455f-98bd-0114eecaaba5\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5x424" Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.154942 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaf439d8-7929-455f-98bd-0114eecaaba5-config\") pod \"dnsmasq-dns-78dd6ddcc-5x424\" (UID: \"aaf439d8-7929-455f-98bd-0114eecaaba5\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5x424" Mar 12 15:05:11 
crc kubenswrapper[4832]: I0312 15:05:11.154977 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wqhc\" (UniqueName: \"kubernetes.io/projected/aaf439d8-7929-455f-98bd-0114eecaaba5-kube-api-access-7wqhc\") pod \"dnsmasq-dns-78dd6ddcc-5x424\" (UID: \"aaf439d8-7929-455f-98bd-0114eecaaba5\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5x424" Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.256013 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wqhc\" (UniqueName: \"kubernetes.io/projected/aaf439d8-7929-455f-98bd-0114eecaaba5-kube-api-access-7wqhc\") pod \"dnsmasq-dns-78dd6ddcc-5x424\" (UID: \"aaf439d8-7929-455f-98bd-0114eecaaba5\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5x424" Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.256101 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8sp9\" (UniqueName: \"kubernetes.io/projected/43d90eb2-2118-41b1-9394-a593a3c22cad-kube-api-access-w8sp9\") pod \"dnsmasq-dns-675f4bcbfc-cr8l7\" (UID: \"43d90eb2-2118-41b1-9394-a593a3c22cad\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cr8l7" Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.256162 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43d90eb2-2118-41b1-9394-a593a3c22cad-config\") pod \"dnsmasq-dns-675f4bcbfc-cr8l7\" (UID: \"43d90eb2-2118-41b1-9394-a593a3c22cad\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cr8l7" Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.256192 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaf439d8-7929-455f-98bd-0114eecaaba5-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5x424\" (UID: \"aaf439d8-7929-455f-98bd-0114eecaaba5\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5x424" Mar 12 15:05:11 crc kubenswrapper[4832]: 
I0312 15:05:11.256247 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaf439d8-7929-455f-98bd-0114eecaaba5-config\") pod \"dnsmasq-dns-78dd6ddcc-5x424\" (UID: \"aaf439d8-7929-455f-98bd-0114eecaaba5\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5x424" Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.257354 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaf439d8-7929-455f-98bd-0114eecaaba5-config\") pod \"dnsmasq-dns-78dd6ddcc-5x424\" (UID: \"aaf439d8-7929-455f-98bd-0114eecaaba5\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5x424" Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.257968 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43d90eb2-2118-41b1-9394-a593a3c22cad-config\") pod \"dnsmasq-dns-675f4bcbfc-cr8l7\" (UID: \"43d90eb2-2118-41b1-9394-a593a3c22cad\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cr8l7" Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.261028 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaf439d8-7929-455f-98bd-0114eecaaba5-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5x424\" (UID: \"aaf439d8-7929-455f-98bd-0114eecaaba5\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5x424" Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.284097 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wqhc\" (UniqueName: \"kubernetes.io/projected/aaf439d8-7929-455f-98bd-0114eecaaba5-kube-api-access-7wqhc\") pod \"dnsmasq-dns-78dd6ddcc-5x424\" (UID: \"aaf439d8-7929-455f-98bd-0114eecaaba5\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5x424" Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.287164 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8sp9\" (UniqueName: 
\"kubernetes.io/projected/43d90eb2-2118-41b1-9394-a593a3c22cad-kube-api-access-w8sp9\") pod \"dnsmasq-dns-675f4bcbfc-cr8l7\" (UID: \"43d90eb2-2118-41b1-9394-a593a3c22cad\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cr8l7" Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.318245 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-cr8l7" Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.362252 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5x424" Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.573143 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cr8l7"] Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.879129 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-cr8l7" event={"ID":"43d90eb2-2118-41b1-9394-a593a3c22cad","Type":"ContainerStarted","Data":"1e8bc004ae99dc051dc9cb3c214a99609cdba1ef9966e61ff3ad1497499ee1d9"} Mar 12 15:05:11 crc kubenswrapper[4832]: I0312 15:05:11.895072 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5x424"] Mar 12 15:05:11 crc kubenswrapper[4832]: W0312 15:05:11.904666 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaf439d8_7929_455f_98bd_0114eecaaba5.slice/crio-fc62c77250144c7d1e683e84f12d7c2bbec44fe3b57074bb0d057b27df531baa WatchSource:0}: Error finding container fc62c77250144c7d1e683e84f12d7c2bbec44fe3b57074bb0d057b27df531baa: Status 404 returned error can't find the container with id fc62c77250144c7d1e683e84f12d7c2bbec44fe3b57074bb0d057b27df531baa Mar 12 15:05:12 crc kubenswrapper[4832]: I0312 15:05:12.888608 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-5x424" 
event={"ID":"aaf439d8-7929-455f-98bd-0114eecaaba5","Type":"ContainerStarted","Data":"fc62c77250144c7d1e683e84f12d7c2bbec44fe3b57074bb0d057b27df531baa"}
Mar 12 15:05:13 crc kubenswrapper[4832]: I0312 15:05:13.709398 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cr8l7"]
Mar 12 15:05:13 crc kubenswrapper[4832]: I0312 15:05:13.729400 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-jj2rf"]
Mar 12 15:05:13 crc kubenswrapper[4832]: I0312 15:05:13.740467 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-jj2rf"]
Mar 12 15:05:13 crc kubenswrapper[4832]: I0312 15:05:13.740605 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf"
Mar 12 15:05:13 crc kubenswrapper[4832]: I0312 15:05:13.904106 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d14f3099-5a1c-410c-9692-a5b90076fcd6-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-jj2rf\" (UID: \"d14f3099-5a1c-410c-9692-a5b90076fcd6\") " pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf"
Mar 12 15:05:13 crc kubenswrapper[4832]: I0312 15:05:13.904147 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d14f3099-5a1c-410c-9692-a5b90076fcd6-config\") pod \"dnsmasq-dns-5ccc8479f9-jj2rf\" (UID: \"d14f3099-5a1c-410c-9692-a5b90076fcd6\") " pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf"
Mar 12 15:05:13 crc kubenswrapper[4832]: I0312 15:05:13.904214 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7zd5\" (UniqueName: \"kubernetes.io/projected/d14f3099-5a1c-410c-9692-a5b90076fcd6-kube-api-access-t7zd5\") pod \"dnsmasq-dns-5ccc8479f9-jj2rf\" (UID: \"d14f3099-5a1c-410c-9692-a5b90076fcd6\") " pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf"
Mar 12 15:05:13 crc kubenswrapper[4832]: I0312 15:05:13.963800 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5x424"]
Mar 12 15:05:13 crc kubenswrapper[4832]: I0312 15:05:13.992294 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gxsg5"]
Mar 12 15:05:13 crc kubenswrapper[4832]: I0312 15:05:13.993712 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.000970 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gxsg5"]
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.007610 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7zd5\" (UniqueName: \"kubernetes.io/projected/d14f3099-5a1c-410c-9692-a5b90076fcd6-kube-api-access-t7zd5\") pod \"dnsmasq-dns-5ccc8479f9-jj2rf\" (UID: \"d14f3099-5a1c-410c-9692-a5b90076fcd6\") " pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.007695 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d14f3099-5a1c-410c-9692-a5b90076fcd6-config\") pod \"dnsmasq-dns-5ccc8479f9-jj2rf\" (UID: \"d14f3099-5a1c-410c-9692-a5b90076fcd6\") " pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.007712 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d14f3099-5a1c-410c-9692-a5b90076fcd6-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-jj2rf\" (UID: \"d14f3099-5a1c-410c-9692-a5b90076fcd6\") " pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.008829 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d14f3099-5a1c-410c-9692-a5b90076fcd6-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-jj2rf\" (UID: \"d14f3099-5a1c-410c-9692-a5b90076fcd6\") " pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.010813 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d14f3099-5a1c-410c-9692-a5b90076fcd6-config\") pod \"dnsmasq-dns-5ccc8479f9-jj2rf\" (UID: \"d14f3099-5a1c-410c-9692-a5b90076fcd6\") " pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.025776 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7zd5\" (UniqueName: \"kubernetes.io/projected/d14f3099-5a1c-410c-9692-a5b90076fcd6-kube-api-access-t7zd5\") pod \"dnsmasq-dns-5ccc8479f9-jj2rf\" (UID: \"d14f3099-5a1c-410c-9692-a5b90076fcd6\") " pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.065078 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.109486 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9336b7-c523-476b-b929-b750118653cd-config\") pod \"dnsmasq-dns-57d769cc4f-gxsg5\" (UID: \"fd9336b7-c523-476b-b929-b750118653cd\") " pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.109854 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zqp5\" (UniqueName: \"kubernetes.io/projected/fd9336b7-c523-476b-b929-b750118653cd-kube-api-access-8zqp5\") pod \"dnsmasq-dns-57d769cc4f-gxsg5\" (UID: \"fd9336b7-c523-476b-b929-b750118653cd\") " pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.109923 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd9336b7-c523-476b-b929-b750118653cd-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gxsg5\" (UID: \"fd9336b7-c523-476b-b929-b750118653cd\") " pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.211323 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zqp5\" (UniqueName: \"kubernetes.io/projected/fd9336b7-c523-476b-b929-b750118653cd-kube-api-access-8zqp5\") pod \"dnsmasq-dns-57d769cc4f-gxsg5\" (UID: \"fd9336b7-c523-476b-b929-b750118653cd\") " pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.211397 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd9336b7-c523-476b-b929-b750118653cd-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gxsg5\" (UID: \"fd9336b7-c523-476b-b929-b750118653cd\") " pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.211495 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9336b7-c523-476b-b929-b750118653cd-config\") pod \"dnsmasq-dns-57d769cc4f-gxsg5\" (UID: \"fd9336b7-c523-476b-b929-b750118653cd\") " pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.212715 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9336b7-c523-476b-b929-b750118653cd-config\") pod \"dnsmasq-dns-57d769cc4f-gxsg5\" (UID: \"fd9336b7-c523-476b-b929-b750118653cd\") " pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.214423 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd9336b7-c523-476b-b929-b750118653cd-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gxsg5\" (UID: \"fd9336b7-c523-476b-b929-b750118653cd\") " pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.232117 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zqp5\" (UniqueName: \"kubernetes.io/projected/fd9336b7-c523-476b-b929-b750118653cd-kube-api-access-8zqp5\") pod \"dnsmasq-dns-57d769cc4f-gxsg5\" (UID: \"fd9336b7-c523-476b-b929-b750118653cd\") " pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.337945 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-jj2rf"]
Mar 12 15:05:14 crc kubenswrapper[4832]: W0312 15:05:14.352773 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd14f3099_5a1c_410c_9692_a5b90076fcd6.slice/crio-cbf3e576a088ddc5bd0feb276d7351188ead6f26356df2b5872cf17e8fed2ced WatchSource:0}: Error finding container cbf3e576a088ddc5bd0feb276d7351188ead6f26356df2b5872cf17e8fed2ced: Status 404 returned error can't find the container with id cbf3e576a088ddc5bd0feb276d7351188ead6f26356df2b5872cf17e8fed2ced
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.375717 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.816752 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gxsg5"]
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.860891 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.862281 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.870056 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.870272 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.870420 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.870696 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.870769 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.870812 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.872001 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-52mlh"
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.880686 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 12 15:05:14 crc kubenswrapper[4832]: I0312 15:05:14.909395 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf" event={"ID":"d14f3099-5a1c-410c-9692-a5b90076fcd6","Type":"ContainerStarted","Data":"cbf3e576a088ddc5bd0feb276d7351188ead6f26356df2b5872cf17e8fed2ced"}
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.022316 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.022402 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.022441 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.022482 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.022566 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.022607 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.022645 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.022688 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw276\" (UniqueName: \"kubernetes.io/projected/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-kube-api-access-pw276\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.022719 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.022747 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.022786 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.124611 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.124702 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw276\" (UniqueName: \"kubernetes.io/projected/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-kube-api-access-pw276\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.124736 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.124781 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.124814 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.124878 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.124899 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.124937 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.124958 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.125028 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.125084 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.126073 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.126533 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.127085 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.125019 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.128325 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.128331 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.130140 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.134392 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.137302 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.138832 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.141160 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.141273 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.141338 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.141930 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-r92zl"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.142053 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.142233 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.145818 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.151129 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw276\" (UniqueName: \"kubernetes.io/projected/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-kube-api-access-pw276\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.154771 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.156030 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.157651 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.161909 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.187285 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.226374 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-config-data\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.226409 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.226424 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.226450 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.226467 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.226488 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.226521 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmm6j\" (UniqueName: \"kubernetes.io/projected/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-kube-api-access-lmm6j\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.226546 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.226585 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.226613 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.226626 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.330363 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-config-data\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.331402 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.331439 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.331476 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.331514 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.331554 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.331581 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmm6j\" (UniqueName: \"kubernetes.io/projected/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-kube-api-access-lmm6j\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.331629 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.331737 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.331789 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.331807 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.331995 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-config-data\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.332142 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.332971 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.334010 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.334633 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.334889 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.335082 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.335357 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.337198 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.337247 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.358202 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmm6j\" (UniqueName: \"kubernetes.io/projected/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-kube-api-access-lmm6j\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.369701 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " pod="openstack/rabbitmq-server-0"
Mar 12 15:05:15 crc kubenswrapper[4832]: I0312 15:05:15.530155 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.417184 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.418890 4832 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.426310 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.426522 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.426620 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-mczwz" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.432674 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.443798 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.444001 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.548855 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/45c3252f-6cf6-49c3-b42b-f692310a0e91-config-data-generated\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.548918 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/45c3252f-6cf6-49c3-b42b-f692310a0e91-config-data-default\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.548946 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgstf\" (UniqueName: \"kubernetes.io/projected/45c3252f-6cf6-49c3-b42b-f692310a0e91-kube-api-access-tgstf\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.548974 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45c3252f-6cf6-49c3-b42b-f692310a0e91-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.549011 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.549039 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45c3252f-6cf6-49c3-b42b-f692310a0e91-operator-scripts\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.549100 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/45c3252f-6cf6-49c3-b42b-f692310a0e91-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.549116 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/45c3252f-6cf6-49c3-b42b-f692310a0e91-kolla-config\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.652698 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45c3252f-6cf6-49c3-b42b-f692310a0e91-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.652772 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.652801 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45c3252f-6cf6-49c3-b42b-f692310a0e91-operator-scripts\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.652862 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/45c3252f-6cf6-49c3-b42b-f692310a0e91-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.652886 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/45c3252f-6cf6-49c3-b42b-f692310a0e91-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.652910 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/45c3252f-6cf6-49c3-b42b-f692310a0e91-config-data-generated\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.652966 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/45c3252f-6cf6-49c3-b42b-f692310a0e91-config-data-default\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.652986 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgstf\" (UniqueName: \"kubernetes.io/projected/45c3252f-6cf6-49c3-b42b-f692310a0e91-kube-api-access-tgstf\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.653111 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.654211 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/45c3252f-6cf6-49c3-b42b-f692310a0e91-config-data-generated\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 
crc kubenswrapper[4832]: I0312 15:05:16.699711 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45c3252f-6cf6-49c3-b42b-f692310a0e91-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.699723 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/45c3252f-6cf6-49c3-b42b-f692310a0e91-config-data-default\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.700470 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/45c3252f-6cf6-49c3-b42b-f692310a0e91-kolla-config\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.700967 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45c3252f-6cf6-49c3-b42b-f692310a0e91-operator-scripts\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.703324 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/45c3252f-6cf6-49c3-b42b-f692310a0e91-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.704550 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.706255 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgstf\" (UniqueName: \"kubernetes.io/projected/45c3252f-6cf6-49c3-b42b-f692310a0e91-kube-api-access-tgstf\") pod \"openstack-galera-0\" (UID: \"45c3252f-6cf6-49c3-b42b-f692310a0e91\") " pod="openstack/openstack-galera-0" Mar 12 15:05:16 crc kubenswrapper[4832]: I0312 15:05:16.746075 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: W0312 15:05:17.516794 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd9336b7_c523_476b_b929_b750118653cd.slice/crio-63af1e8d4989936ae4176c0922cb293d8df3cd4f2ad5db09235719f535c47d2d WatchSource:0}: Error finding container 63af1e8d4989936ae4176c0922cb293d8df3cd4f2ad5db09235719f535c47d2d: Status 404 returned error can't find the container with id 63af1e8d4989936ae4176c0922cb293d8df3cd4f2ad5db09235719f535c47d2d Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.760669 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.762364 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.768565 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.768702 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.768816 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.769352 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.769471 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-pc2vw" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.881552 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e4e284-b647-4f24-915d-b50315c0fb5e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.881817 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/07e4e284-b647-4f24-915d-b50315c0fb5e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.881858 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.881931 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e4e284-b647-4f24-915d-b50315c0fb5e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.882005 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e4e284-b647-4f24-915d-b50315c0fb5e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.882068 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x86xd\" (UniqueName: \"kubernetes.io/projected/07e4e284-b647-4f24-915d-b50315c0fb5e-kube-api-access-x86xd\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.882117 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/07e4e284-b647-4f24-915d-b50315c0fb5e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.882175 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/07e4e284-b647-4f24-915d-b50315c0fb5e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.929827 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5" event={"ID":"fd9336b7-c523-476b-b929-b750118653cd","Type":"ContainerStarted","Data":"63af1e8d4989936ae4176c0922cb293d8df3cd4f2ad5db09235719f535c47d2d"} Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.983380 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e4e284-b647-4f24-915d-b50315c0fb5e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.983438 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x86xd\" (UniqueName: \"kubernetes.io/projected/07e4e284-b647-4f24-915d-b50315c0fb5e-kube-api-access-x86xd\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.983467 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/07e4e284-b647-4f24-915d-b50315c0fb5e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.983522 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/07e4e284-b647-4f24-915d-b50315c0fb5e-config-data-default\") pod \"openstack-cell1-galera-0\" 
(UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.983567 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e4e284-b647-4f24-915d-b50315c0fb5e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.983594 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/07e4e284-b647-4f24-915d-b50315c0fb5e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.983623 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.983652 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e4e284-b647-4f24-915d-b50315c0fb5e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.984204 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Mar 12 
15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.984539 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/07e4e284-b647-4f24-915d-b50315c0fb5e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.984750 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/07e4e284-b647-4f24-915d-b50315c0fb5e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.986578 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e4e284-b647-4f24-915d-b50315c0fb5e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.988036 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/07e4e284-b647-4f24-915d-b50315c0fb5e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:17 crc kubenswrapper[4832]: I0312 15:05:17.998569 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e4e284-b647-4f24-915d-b50315c0fb5e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.001165 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-x86xd\" (UniqueName: \"kubernetes.io/projected/07e4e284-b647-4f24-915d-b50315c0fb5e-kube-api-access-x86xd\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.004773 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e4e284-b647-4f24-915d-b50315c0fb5e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.007399 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"07e4e284-b647-4f24-915d-b50315c0fb5e\") " pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.070490 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.071571 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.072981 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-9x8lg" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.074817 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.075255 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.085567 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.088666 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.186623 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f04c5e2-4eb0-4515-aa61-006f0b34ee93-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0f04c5e2-4eb0-4515-aa61-006f0b34ee93\") " pod="openstack/memcached-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.186700 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0f04c5e2-4eb0-4515-aa61-006f0b34ee93-kolla-config\") pod \"memcached-0\" (UID: \"0f04c5e2-4eb0-4515-aa61-006f0b34ee93\") " pod="openstack/memcached-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.186730 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f04c5e2-4eb0-4515-aa61-006f0b34ee93-config-data\") pod \"memcached-0\" (UID: \"0f04c5e2-4eb0-4515-aa61-006f0b34ee93\") " pod="openstack/memcached-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.186757 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f04c5e2-4eb0-4515-aa61-006f0b34ee93-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0f04c5e2-4eb0-4515-aa61-006f0b34ee93\") " pod="openstack/memcached-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.186852 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ppbm\" (UniqueName: 
\"kubernetes.io/projected/0f04c5e2-4eb0-4515-aa61-006f0b34ee93-kube-api-access-2ppbm\") pod \"memcached-0\" (UID: \"0f04c5e2-4eb0-4515-aa61-006f0b34ee93\") " pod="openstack/memcached-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.288806 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f04c5e2-4eb0-4515-aa61-006f0b34ee93-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0f04c5e2-4eb0-4515-aa61-006f0b34ee93\") " pod="openstack/memcached-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.288872 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0f04c5e2-4eb0-4515-aa61-006f0b34ee93-kolla-config\") pod \"memcached-0\" (UID: \"0f04c5e2-4eb0-4515-aa61-006f0b34ee93\") " pod="openstack/memcached-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.288901 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f04c5e2-4eb0-4515-aa61-006f0b34ee93-config-data\") pod \"memcached-0\" (UID: \"0f04c5e2-4eb0-4515-aa61-006f0b34ee93\") " pod="openstack/memcached-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.289636 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0f04c5e2-4eb0-4515-aa61-006f0b34ee93-kolla-config\") pod \"memcached-0\" (UID: \"0f04c5e2-4eb0-4515-aa61-006f0b34ee93\") " pod="openstack/memcached-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.289872 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f04c5e2-4eb0-4515-aa61-006f0b34ee93-config-data\") pod \"memcached-0\" (UID: \"0f04c5e2-4eb0-4515-aa61-006f0b34ee93\") " pod="openstack/memcached-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.289929 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f04c5e2-4eb0-4515-aa61-006f0b34ee93-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0f04c5e2-4eb0-4515-aa61-006f0b34ee93\") " pod="openstack/memcached-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.290043 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ppbm\" (UniqueName: \"kubernetes.io/projected/0f04c5e2-4eb0-4515-aa61-006f0b34ee93-kube-api-access-2ppbm\") pod \"memcached-0\" (UID: \"0f04c5e2-4eb0-4515-aa61-006f0b34ee93\") " pod="openstack/memcached-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.292357 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f04c5e2-4eb0-4515-aa61-006f0b34ee93-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0f04c5e2-4eb0-4515-aa61-006f0b34ee93\") " pod="openstack/memcached-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.295068 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f04c5e2-4eb0-4515-aa61-006f0b34ee93-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0f04c5e2-4eb0-4515-aa61-006f0b34ee93\") " pod="openstack/memcached-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.308029 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ppbm\" (UniqueName: \"kubernetes.io/projected/0f04c5e2-4eb0-4515-aa61-006f0b34ee93-kube-api-access-2ppbm\") pod \"memcached-0\" (UID: \"0f04c5e2-4eb0-4515-aa61-006f0b34ee93\") " pod="openstack/memcached-0" Mar 12 15:05:18 crc kubenswrapper[4832]: I0312 15:05:18.404936 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 12 15:05:20 crc kubenswrapper[4832]: I0312 15:05:20.158327 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 15:05:20 crc kubenswrapper[4832]: I0312 15:05:20.159853 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 15:05:20 crc kubenswrapper[4832]: I0312 15:05:20.163073 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-t9frk" Mar 12 15:05:20 crc kubenswrapper[4832]: I0312 15:05:20.184381 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 15:05:20 crc kubenswrapper[4832]: I0312 15:05:20.323557 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfkz2\" (UniqueName: \"kubernetes.io/projected/9f816534-91fd-42d6-8193-85a77ad3490c-kube-api-access-kfkz2\") pod \"kube-state-metrics-0\" (UID: \"9f816534-91fd-42d6-8193-85a77ad3490c\") " pod="openstack/kube-state-metrics-0" Mar 12 15:05:20 crc kubenswrapper[4832]: I0312 15:05:20.428337 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfkz2\" (UniqueName: \"kubernetes.io/projected/9f816534-91fd-42d6-8193-85a77ad3490c-kube-api-access-kfkz2\") pod \"kube-state-metrics-0\" (UID: \"9f816534-91fd-42d6-8193-85a77ad3490c\") " pod="openstack/kube-state-metrics-0" Mar 12 15:05:20 crc kubenswrapper[4832]: I0312 15:05:20.448351 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfkz2\" (UniqueName: \"kubernetes.io/projected/9f816534-91fd-42d6-8193-85a77ad3490c-kube-api-access-kfkz2\") pod \"kube-state-metrics-0\" (UID: \"9f816534-91fd-42d6-8193-85a77ad3490c\") " pod="openstack/kube-state-metrics-0" Mar 12 15:05:20 crc kubenswrapper[4832]: I0312 15:05:20.488071 4832 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 15:05:21 crc kubenswrapper[4832]: I0312 15:05:21.160231 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.830854 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6x8t6"] Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.832043 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.847436 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6x8t6"] Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.847760 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-47m69" Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.848159 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.848367 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.885638 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-kmmfr"] Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.888872 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.894246 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kmmfr"] Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.984200 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d50077d7-6691-4664-89cc-3be14f2e8313-var-run\") pod \"ovn-controller-ovs-kmmfr\" (UID: \"d50077d7-6691-4664-89cc-3be14f2e8313\") " pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.984235 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-ovn-controller-tls-certs\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.984279 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-var-log-ovn\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.984309 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-var-run\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.984378 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-var-run-ovn\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.984402 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d50077d7-6691-4664-89cc-3be14f2e8313-etc-ovs\") pod \"ovn-controller-ovs-kmmfr\" (UID: \"d50077d7-6691-4664-89cc-3be14f2e8313\") " pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.984438 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-scripts\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.984455 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-combined-ca-bundle\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.984472 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d50077d7-6691-4664-89cc-3be14f2e8313-var-log\") pod \"ovn-controller-ovs-kmmfr\" (UID: \"d50077d7-6691-4664-89cc-3be14f2e8313\") " pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.984492 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/d50077d7-6691-4664-89cc-3be14f2e8313-scripts\") pod \"ovn-controller-ovs-kmmfr\" (UID: \"d50077d7-6691-4664-89cc-3be14f2e8313\") " pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.984529 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d50077d7-6691-4664-89cc-3be14f2e8313-var-lib\") pod \"ovn-controller-ovs-kmmfr\" (UID: \"d50077d7-6691-4664-89cc-3be14f2e8313\") " pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.984554 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q292n\" (UniqueName: \"kubernetes.io/projected/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-kube-api-access-q292n\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:23 crc kubenswrapper[4832]: I0312 15:05:23.984585 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw8p6\" (UniqueName: \"kubernetes.io/projected/d50077d7-6691-4664-89cc-3be14f2e8313-kube-api-access-zw8p6\") pod \"ovn-controller-ovs-kmmfr\" (UID: \"d50077d7-6691-4664-89cc-3be14f2e8313\") " pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.085585 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-scripts\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.085631 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-combined-ca-bundle\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.085659 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d50077d7-6691-4664-89cc-3be14f2e8313-var-log\") pod \"ovn-controller-ovs-kmmfr\" (UID: \"d50077d7-6691-4664-89cc-3be14f2e8313\") " pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.085691 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d50077d7-6691-4664-89cc-3be14f2e8313-scripts\") pod \"ovn-controller-ovs-kmmfr\" (UID: \"d50077d7-6691-4664-89cc-3be14f2e8313\") " pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.085716 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d50077d7-6691-4664-89cc-3be14f2e8313-var-lib\") pod \"ovn-controller-ovs-kmmfr\" (UID: \"d50077d7-6691-4664-89cc-3be14f2e8313\") " pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.085853 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q292n\" (UniqueName: \"kubernetes.io/projected/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-kube-api-access-q292n\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.085885 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw8p6\" (UniqueName: \"kubernetes.io/projected/d50077d7-6691-4664-89cc-3be14f2e8313-kube-api-access-zw8p6\") pod \"ovn-controller-ovs-kmmfr\" (UID: 
\"d50077d7-6691-4664-89cc-3be14f2e8313\") " pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.086527 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d50077d7-6691-4664-89cc-3be14f2e8313-var-run\") pod \"ovn-controller-ovs-kmmfr\" (UID: \"d50077d7-6691-4664-89cc-3be14f2e8313\") " pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.086565 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-ovn-controller-tls-certs\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.086592 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-var-log-ovn\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.086624 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-var-run\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.086645 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-var-run-ovn\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.086661 
4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d50077d7-6691-4664-89cc-3be14f2e8313-etc-ovs\") pod \"ovn-controller-ovs-kmmfr\" (UID: \"d50077d7-6691-4664-89cc-3be14f2e8313\") " pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.086864 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d50077d7-6691-4664-89cc-3be14f2e8313-etc-ovs\") pod \"ovn-controller-ovs-kmmfr\" (UID: \"d50077d7-6691-4664-89cc-3be14f2e8313\") " pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.086903 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d50077d7-6691-4664-89cc-3be14f2e8313-var-lib\") pod \"ovn-controller-ovs-kmmfr\" (UID: \"d50077d7-6691-4664-89cc-3be14f2e8313\") " pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.086736 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d50077d7-6691-4664-89cc-3be14f2e8313-var-log\") pod \"ovn-controller-ovs-kmmfr\" (UID: \"d50077d7-6691-4664-89cc-3be14f2e8313\") " pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.087104 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-var-run\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.087070 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-var-log-ovn\") pod 
\"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.087070 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d50077d7-6691-4664-89cc-3be14f2e8313-var-run\") pod \"ovn-controller-ovs-kmmfr\" (UID: \"d50077d7-6691-4664-89cc-3be14f2e8313\") " pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.088010 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-var-run-ovn\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.089419 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d50077d7-6691-4664-89cc-3be14f2e8313-scripts\") pod \"ovn-controller-ovs-kmmfr\" (UID: \"d50077d7-6691-4664-89cc-3be14f2e8313\") " pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.090284 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-scripts\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.096064 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-combined-ca-bundle\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.098868 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-ovn-controller-tls-certs\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.102183 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw8p6\" (UniqueName: \"kubernetes.io/projected/d50077d7-6691-4664-89cc-3be14f2e8313-kube-api-access-zw8p6\") pod \"ovn-controller-ovs-kmmfr\" (UID: \"d50077d7-6691-4664-89cc-3be14f2e8313\") " pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.103073 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q292n\" (UniqueName: \"kubernetes.io/projected/9d4200a6-7cc2-4b4a-b01e-290567a2ec8c-kube-api-access-q292n\") pod \"ovn-controller-6x8t6\" (UID: \"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c\") " pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.151038 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.208511 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.721750 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.723283 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.726406 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.726599 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.726913 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-jwdwk" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.727587 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.727795 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.729190 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.796898 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d73c1039-b9bc-4861-87d3-22457aecb575-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.796991 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d73c1039-b9bc-4861-87d3-22457aecb575-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.797088 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d73c1039-b9bc-4861-87d3-22457aecb575-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.797242 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d73c1039-b9bc-4861-87d3-22457aecb575-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.797369 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d73c1039-b9bc-4861-87d3-22457aecb575-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.797484 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6g6c\" (UniqueName: \"kubernetes.io/projected/d73c1039-b9bc-4861-87d3-22457aecb575-kube-api-access-f6g6c\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.797598 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.797624 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d73c1039-b9bc-4861-87d3-22457aecb575-config\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.898765 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6g6c\" (UniqueName: \"kubernetes.io/projected/d73c1039-b9bc-4861-87d3-22457aecb575-kube-api-access-f6g6c\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.898825 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.898852 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73c1039-b9bc-4861-87d3-22457aecb575-config\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.898892 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d73c1039-b9bc-4861-87d3-22457aecb575-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.898931 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d73c1039-b9bc-4861-87d3-22457aecb575-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 
15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.898953 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d73c1039-b9bc-4861-87d3-22457aecb575-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.899077 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d73c1039-b9bc-4861-87d3-22457aecb575-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.899118 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d73c1039-b9bc-4861-87d3-22457aecb575-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.899148 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.899367 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d73c1039-b9bc-4861-87d3-22457aecb575-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.899850 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d73c1039-b9bc-4861-87d3-22457aecb575-config\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.900450 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d73c1039-b9bc-4861-87d3-22457aecb575-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.902710 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d73c1039-b9bc-4861-87d3-22457aecb575-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.911670 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d73c1039-b9bc-4861-87d3-22457aecb575-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.913106 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d73c1039-b9bc-4861-87d3-22457aecb575-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.914880 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6g6c\" (UniqueName: \"kubernetes.io/projected/d73c1039-b9bc-4861-87d3-22457aecb575-kube-api-access-f6g6c\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " 
pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:24 crc kubenswrapper[4832]: I0312 15:05:24.918277 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d73c1039-b9bc-4861-87d3-22457aecb575\") " pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:25 crc kubenswrapper[4832]: I0312 15:05:25.047948 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:25 crc kubenswrapper[4832]: W0312 15:05:25.462795 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45c3252f_6cf6_49c3_b42b_f692310a0e91.slice/crio-51971ff8fa7b9b82202551a6b1b181e7f7e4bded56fbe805d3e51867c080e10c WatchSource:0}: Error finding container 51971ff8fa7b9b82202551a6b1b181e7f7e4bded56fbe805d3e51867c080e10c: Status 404 returned error can't find the container with id 51971ff8fa7b9b82202551a6b1b181e7f7e4bded56fbe805d3e51867c080e10c Mar 12 15:05:25 crc kubenswrapper[4832]: I0312 15:05:25.996368 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"45c3252f-6cf6-49c3-b42b-f692310a0e91","Type":"ContainerStarted","Data":"51971ff8fa7b9b82202551a6b1b181e7f7e4bded56fbe805d3e51867c080e10c"} Mar 12 15:05:26 crc kubenswrapper[4832]: E0312 15:05:26.241065 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 12 15:05:26 crc kubenswrapper[4832]: E0312 15:05:26.241336 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 12 15:05:26 crc kubenswrapper[4832]: E0312 15:05:26.241599 4832 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8sp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod dnsmasq-dns-675f4bcbfc-cr8l7_openstack(43d90eb2-2118-41b1-9394-a593a3c22cad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 15:05:26 crc kubenswrapper[4832]: E0312 15:05:26.241728 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wqhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRo
ot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-5x424_openstack(aaf439d8-7929-455f-98bd-0114eecaaba5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 15:05:26 crc kubenswrapper[4832]: E0312 15:05:26.242892 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-cr8l7" podUID="43d90eb2-2118-41b1-9394-a593a3c22cad" Mar 12 15:05:26 crc kubenswrapper[4832]: E0312 15:05:26.242914 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-5x424" podUID="aaf439d8-7929-455f-98bd-0114eecaaba5" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.400933 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.402056 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.404462 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.404486 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.407995 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.408340 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-2gwnx" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.423785 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.521167 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f51f8d-71a9-4409-abd0-8981bced84a2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.521986 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f51f8d-71a9-4409-abd0-8981bced84a2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.522085 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f51f8d-71a9-4409-abd0-8981bced84a2-metrics-certs-tls-certs\") pod 
\"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.522119 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f51f8d-71a9-4409-abd0-8981bced84a2-config\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.522165 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wpbx\" (UniqueName: \"kubernetes.io/projected/60f51f8d-71a9-4409-abd0-8981bced84a2-kube-api-access-8wpbx\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.522190 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/60f51f8d-71a9-4409-abd0-8981bced84a2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.522223 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60f51f8d-71a9-4409-abd0-8981bced84a2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.522344 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc 
kubenswrapper[4832]: I0312 15:05:26.629844 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f51f8d-71a9-4409-abd0-8981bced84a2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.629896 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f51f8d-71a9-4409-abd0-8981bced84a2-config\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.629929 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wpbx\" (UniqueName: \"kubernetes.io/projected/60f51f8d-71a9-4409-abd0-8981bced84a2-kube-api-access-8wpbx\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.629952 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/60f51f8d-71a9-4409-abd0-8981bced84a2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.629981 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60f51f8d-71a9-4409-abd0-8981bced84a2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.630022 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.630064 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f51f8d-71a9-4409-abd0-8981bced84a2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.630094 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f51f8d-71a9-4409-abd0-8981bced84a2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.631400 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/60f51f8d-71a9-4409-abd0-8981bced84a2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.631857 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.631900 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f51f8d-71a9-4409-abd0-8981bced84a2-config\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 
15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.632888 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60f51f8d-71a9-4409-abd0-8981bced84a2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.637653 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f51f8d-71a9-4409-abd0-8981bced84a2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.639392 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f51f8d-71a9-4409-abd0-8981bced84a2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.649464 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f51f8d-71a9-4409-abd0-8981bced84a2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.649925 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wpbx\" (UniqueName: \"kubernetes.io/projected/60f51f8d-71a9-4409-abd0-8981bced84a2-kube-api-access-8wpbx\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.662990 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 15:05:26 crc 
kubenswrapper[4832]: I0312 15:05:26.663937 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"60f51f8d-71a9-4409-abd0-8981bced84a2\") " pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.767363 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 12 15:05:26 crc kubenswrapper[4832]: W0312 15:05:26.772840 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f04c5e2_4eb0_4515_aa61_006f0b34ee93.slice/crio-80f97be1384ce808233791d793a55a15fc2d8fb5833967aa23eb54b86e58525f WatchSource:0}: Error finding container 80f97be1384ce808233791d793a55a15fc2d8fb5833967aa23eb54b86e58525f: Status 404 returned error can't find the container with id 80f97be1384ce808233791d793a55a15fc2d8fb5833967aa23eb54b86e58525f Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.837950 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.918443 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 12 15:05:26 crc kubenswrapper[4832]: I0312 15:05:26.927828 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 15:05:26 crc kubenswrapper[4832]: W0312 15:05:26.932401 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07e4e284_b647_4f24_915d_b50315c0fb5e.slice/crio-641af4e37a43ea41864db570848cc45dfb15a8547fc2b3987a4d73af6e4c86c9 WatchSource:0}: Error finding container 641af4e37a43ea41864db570848cc45dfb15a8547fc2b3987a4d73af6e4c86c9: Status 404 returned error can't find the container with id 641af4e37a43ea41864db570848cc45dfb15a8547fc2b3987a4d73af6e4c86c9 Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.022756 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"07e4e284-b647-4f24-915d-b50315c0fb5e","Type":"ContainerStarted","Data":"641af4e37a43ea41864db570848cc45dfb15a8547fc2b3987a4d73af6e4c86c9"} Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.023389 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6x8t6"] Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.025605 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fef23d2a-252b-4733-bb4e-e83d5de2f4f4","Type":"ContainerStarted","Data":"aa7e1a390303b63284f4cafc49be1f9e64a9713a984093bbd0099f1abed489ab"} Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.027403 4832 generic.go:334] "Generic (PLEG): container finished" podID="fd9336b7-c523-476b-b929-b750118653cd" containerID="8868573d67717861ece7fc771f1e299ce179168cebb885b1e50302b2829688da" exitCode=0 Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 
15:05:27.027472 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5" event={"ID":"fd9336b7-c523-476b-b929-b750118653cd","Type":"ContainerDied","Data":"8868573d67717861ece7fc771f1e299ce179168cebb885b1e50302b2829688da"} Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.028692 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"667d5405-474a-4ab3-bcbf-8fd5d1c179aa","Type":"ContainerStarted","Data":"756da955fede94e319835c2f5a18954bc71395a0426d68f26b064f8ce905f05b"} Mar 12 15:05:27 crc kubenswrapper[4832]: W0312 15:05:27.030347 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d4200a6_7cc2_4b4a_b01e_290567a2ec8c.slice/crio-b708d27edf37d32cca7c07062e6cf08acedc41b69a0b402c2c931480c10067a0 WatchSource:0}: Error finding container b708d27edf37d32cca7c07062e6cf08acedc41b69a0b402c2c931480c10067a0: Status 404 returned error can't find the container with id b708d27edf37d32cca7c07062e6cf08acedc41b69a0b402c2c931480c10067a0 Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.030422 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0f04c5e2-4eb0-4515-aa61-006f0b34ee93","Type":"ContainerStarted","Data":"80f97be1384ce808233791d793a55a15fc2d8fb5833967aa23eb54b86e58525f"} Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.032342 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.034923 4832 generic.go:334] "Generic (PLEG): container finished" podID="d14f3099-5a1c-410c-9692-a5b90076fcd6" containerID="dc805beb0d78892cbdc5e1553c89f3c5d7ba1e8c2c76831f320ea31ac671f81d" exitCode=0 Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.035585 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf" 
event={"ID":"d14f3099-5a1c-410c-9692-a5b90076fcd6","Type":"ContainerDied","Data":"dc805beb0d78892cbdc5e1553c89f3c5d7ba1e8c2c76831f320ea31ac671f81d"} Mar 12 15:05:27 crc kubenswrapper[4832]: W0312 15:05:27.037173 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd73c1039_b9bc_4861_87d3_22457aecb575.slice/crio-dbacdb575aa428634742844c3528bcf9436fb239c0929a9e0d5096cacac8d974 WatchSource:0}: Error finding container dbacdb575aa428634742844c3528bcf9436fb239c0929a9e0d5096cacac8d974: Status 404 returned error can't find the container with id dbacdb575aa428634742844c3528bcf9436fb239c0929a9e0d5096cacac8d974 Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.051384 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.159440 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kmmfr"] Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.348060 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5x424" Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.437437 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-cr8l7" Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.442143 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 15:05:27 crc kubenswrapper[4832]: W0312 15:05:27.452826 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60f51f8d_71a9_4409_abd0_8981bced84a2.slice/crio-2590375ce3794b2a7b9650d959a1d07df50089f48ef6730cdaa77071c8699ae9 WatchSource:0}: Error finding container 2590375ce3794b2a7b9650d959a1d07df50089f48ef6730cdaa77071c8699ae9: Status 404 returned error can't find the container with id 2590375ce3794b2a7b9650d959a1d07df50089f48ef6730cdaa77071c8699ae9 Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.455621 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaf439d8-7929-455f-98bd-0114eecaaba5-config\") pod \"aaf439d8-7929-455f-98bd-0114eecaaba5\" (UID: \"aaf439d8-7929-455f-98bd-0114eecaaba5\") " Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.455715 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wqhc\" (UniqueName: \"kubernetes.io/projected/aaf439d8-7929-455f-98bd-0114eecaaba5-kube-api-access-7wqhc\") pod \"aaf439d8-7929-455f-98bd-0114eecaaba5\" (UID: \"aaf439d8-7929-455f-98bd-0114eecaaba5\") " Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.455971 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaf439d8-7929-455f-98bd-0114eecaaba5-dns-svc\") pod \"aaf439d8-7929-455f-98bd-0114eecaaba5\" (UID: \"aaf439d8-7929-455f-98bd-0114eecaaba5\") " Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.456185 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/aaf439d8-7929-455f-98bd-0114eecaaba5-config" (OuterVolumeSpecName: "config") pod "aaf439d8-7929-455f-98bd-0114eecaaba5" (UID: "aaf439d8-7929-455f-98bd-0114eecaaba5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.456423 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaf439d8-7929-455f-98bd-0114eecaaba5-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.456832 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaf439d8-7929-455f-98bd-0114eecaaba5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aaf439d8-7929-455f-98bd-0114eecaaba5" (UID: "aaf439d8-7929-455f-98bd-0114eecaaba5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.467286 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaf439d8-7929-455f-98bd-0114eecaaba5-kube-api-access-7wqhc" (OuterVolumeSpecName: "kube-api-access-7wqhc") pod "aaf439d8-7929-455f-98bd-0114eecaaba5" (UID: "aaf439d8-7929-455f-98bd-0114eecaaba5"). InnerVolumeSpecName "kube-api-access-7wqhc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.557938 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8sp9\" (UniqueName: \"kubernetes.io/projected/43d90eb2-2118-41b1-9394-a593a3c22cad-kube-api-access-w8sp9\") pod \"43d90eb2-2118-41b1-9394-a593a3c22cad\" (UID: \"43d90eb2-2118-41b1-9394-a593a3c22cad\") " Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.558029 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43d90eb2-2118-41b1-9394-a593a3c22cad-config\") pod \"43d90eb2-2118-41b1-9394-a593a3c22cad\" (UID: \"43d90eb2-2118-41b1-9394-a593a3c22cad\") " Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.558443 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaf439d8-7929-455f-98bd-0114eecaaba5-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.558457 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wqhc\" (UniqueName: \"kubernetes.io/projected/aaf439d8-7929-455f-98bd-0114eecaaba5-kube-api-access-7wqhc\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.558625 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43d90eb2-2118-41b1-9394-a593a3c22cad-config" (OuterVolumeSpecName: "config") pod "43d90eb2-2118-41b1-9394-a593a3c22cad" (UID: "43d90eb2-2118-41b1-9394-a593a3c22cad"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.561801 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43d90eb2-2118-41b1-9394-a593a3c22cad-kube-api-access-w8sp9" (OuterVolumeSpecName: "kube-api-access-w8sp9") pod "43d90eb2-2118-41b1-9394-a593a3c22cad" (UID: "43d90eb2-2118-41b1-9394-a593a3c22cad"). InnerVolumeSpecName "kube-api-access-w8sp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.660588 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8sp9\" (UniqueName: \"kubernetes.io/projected/43d90eb2-2118-41b1-9394-a593a3c22cad-kube-api-access-w8sp9\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:27 crc kubenswrapper[4832]: I0312 15:05:27.660637 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43d90eb2-2118-41b1-9394-a593a3c22cad-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.044813 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf" event={"ID":"d14f3099-5a1c-410c-9692-a5b90076fcd6","Type":"ContainerStarted","Data":"853d8fb6ee33ccd2609d90e1806ce235b18c9e17c7079b70d4a673acbd373a76"} Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.044880 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf" Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.046779 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"60f51f8d-71a9-4409-abd0-8981bced84a2","Type":"ContainerStarted","Data":"2590375ce3794b2a7b9650d959a1d07df50089f48ef6730cdaa77071c8699ae9"} Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.049435 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5" event={"ID":"fd9336b7-c523-476b-b929-b750118653cd","Type":"ContainerStarted","Data":"c988ba424d2715a27a3137688064a85fd7ab57108c4f409a7c8e25370467aea1"} Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.049570 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5" Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.050837 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-cr8l7" Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.050836 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-cr8l7" event={"ID":"43d90eb2-2118-41b1-9394-a593a3c22cad","Type":"ContainerDied","Data":"1e8bc004ae99dc051dc9cb3c214a99609cdba1ef9966e61ff3ad1497499ee1d9"} Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.053233 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6x8t6" event={"ID":"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c","Type":"ContainerStarted","Data":"b708d27edf37d32cca7c07062e6cf08acedc41b69a0b402c2c931480c10067a0"} Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.054948 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d73c1039-b9bc-4861-87d3-22457aecb575","Type":"ContainerStarted","Data":"dbacdb575aa428634742844c3528bcf9436fb239c0929a9e0d5096cacac8d974"} Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.055816 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kmmfr" event={"ID":"d50077d7-6691-4664-89cc-3be14f2e8313","Type":"ContainerStarted","Data":"19c73c24d1a97958272a045a431c65ab8ef18711fa42343c97230e4a993ea5df"} Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.056619 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-5x424" 
event={"ID":"aaf439d8-7929-455f-98bd-0114eecaaba5","Type":"ContainerDied","Data":"fc62c77250144c7d1e683e84f12d7c2bbec44fe3b57074bb0d057b27df531baa"} Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.056677 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5x424" Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.061955 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9f816534-91fd-42d6-8193-85a77ad3490c","Type":"ContainerStarted","Data":"7d00d2b604032c4b7ca45f8ebc1091f95ddf1b418cc591e74a5a12226d1727ff"} Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.075841 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf" podStartSLOduration=3.090464106 podStartE2EDuration="15.064264083s" podCreationTimestamp="2026-03-12 15:05:13 +0000 UTC" firstStartedPulling="2026-03-12 15:05:14.355345059 +0000 UTC m=+1072.999359285" lastFinishedPulling="2026-03-12 15:05:26.329145016 +0000 UTC m=+1084.973159262" observedRunningTime="2026-03-12 15:05:28.059465214 +0000 UTC m=+1086.703479440" watchObservedRunningTime="2026-03-12 15:05:28.064264083 +0000 UTC m=+1086.708278309" Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.094564 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5" podStartSLOduration=6.289396136 podStartE2EDuration="15.0945345s" podCreationTimestamp="2026-03-12 15:05:13 +0000 UTC" firstStartedPulling="2026-03-12 15:05:17.533769745 +0000 UTC m=+1076.177783981" lastFinishedPulling="2026-03-12 15:05:26.338908119 +0000 UTC m=+1084.982922345" observedRunningTime="2026-03-12 15:05:28.078198507 +0000 UTC m=+1086.722212733" watchObservedRunningTime="2026-03-12 15:05:28.0945345 +0000 UTC m=+1086.738548726" Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.131642 4832 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cr8l7"] Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.131692 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cr8l7"] Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.190907 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5x424"] Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.198253 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5x424"] Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.628768 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43d90eb2-2118-41b1-9394-a593a3c22cad" path="/var/lib/kubelet/pods/43d90eb2-2118-41b1-9394-a593a3c22cad/volumes" Mar 12 15:05:28 crc kubenswrapper[4832]: I0312 15:05:28.629533 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaf439d8-7929-455f-98bd-0114eecaaba5" path="/var/lib/kubelet/pods/aaf439d8-7929-455f-98bd-0114eecaaba5/volumes" Mar 12 15:05:34 crc kubenswrapper[4832]: I0312 15:05:34.068196 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf" Mar 12 15:05:34 crc kubenswrapper[4832]: I0312 15:05:34.376843 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5" Mar 12 15:05:34 crc kubenswrapper[4832]: I0312 15:05:34.441063 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-jj2rf"] Mar 12 15:05:34 crc kubenswrapper[4832]: I0312 15:05:34.441249 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf" podUID="d14f3099-5a1c-410c-9692-a5b90076fcd6" containerName="dnsmasq-dns" containerID="cri-o://853d8fb6ee33ccd2609d90e1806ce235b18c9e17c7079b70d4a673acbd373a76" gracePeriod=10 Mar 12 15:05:35 crc kubenswrapper[4832]: I0312 
15:05:35.120176 4832 generic.go:334] "Generic (PLEG): container finished" podID="d14f3099-5a1c-410c-9692-a5b90076fcd6" containerID="853d8fb6ee33ccd2609d90e1806ce235b18c9e17c7079b70d4a673acbd373a76" exitCode=0 Mar 12 15:05:35 crc kubenswrapper[4832]: I0312 15:05:35.120436 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf" event={"ID":"d14f3099-5a1c-410c-9692-a5b90076fcd6","Type":"ContainerDied","Data":"853d8fb6ee33ccd2609d90e1806ce235b18c9e17c7079b70d4a673acbd373a76"} Mar 12 15:05:35 crc kubenswrapper[4832]: I0312 15:05:35.561657 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf" Mar 12 15:05:35 crc kubenswrapper[4832]: I0312 15:05:35.630972 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7zd5\" (UniqueName: \"kubernetes.io/projected/d14f3099-5a1c-410c-9692-a5b90076fcd6-kube-api-access-t7zd5\") pod \"d14f3099-5a1c-410c-9692-a5b90076fcd6\" (UID: \"d14f3099-5a1c-410c-9692-a5b90076fcd6\") " Mar 12 15:05:35 crc kubenswrapper[4832]: I0312 15:05:35.631105 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d14f3099-5a1c-410c-9692-a5b90076fcd6-dns-svc\") pod \"d14f3099-5a1c-410c-9692-a5b90076fcd6\" (UID: \"d14f3099-5a1c-410c-9692-a5b90076fcd6\") " Mar 12 15:05:35 crc kubenswrapper[4832]: I0312 15:05:35.631135 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d14f3099-5a1c-410c-9692-a5b90076fcd6-config\") pod \"d14f3099-5a1c-410c-9692-a5b90076fcd6\" (UID: \"d14f3099-5a1c-410c-9692-a5b90076fcd6\") " Mar 12 15:05:35 crc kubenswrapper[4832]: I0312 15:05:35.645267 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d14f3099-5a1c-410c-9692-a5b90076fcd6-kube-api-access-t7zd5" 
(OuterVolumeSpecName: "kube-api-access-t7zd5") pod "d14f3099-5a1c-410c-9692-a5b90076fcd6" (UID: "d14f3099-5a1c-410c-9692-a5b90076fcd6"). InnerVolumeSpecName "kube-api-access-t7zd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:05:35 crc kubenswrapper[4832]: I0312 15:05:35.679486 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d14f3099-5a1c-410c-9692-a5b90076fcd6-config" (OuterVolumeSpecName: "config") pod "d14f3099-5a1c-410c-9692-a5b90076fcd6" (UID: "d14f3099-5a1c-410c-9692-a5b90076fcd6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:05:35 crc kubenswrapper[4832]: I0312 15:05:35.682346 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d14f3099-5a1c-410c-9692-a5b90076fcd6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d14f3099-5a1c-410c-9692-a5b90076fcd6" (UID: "d14f3099-5a1c-410c-9692-a5b90076fcd6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:05:35 crc kubenswrapper[4832]: I0312 15:05:35.732650 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7zd5\" (UniqueName: \"kubernetes.io/projected/d14f3099-5a1c-410c-9692-a5b90076fcd6-kube-api-access-t7zd5\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:35 crc kubenswrapper[4832]: I0312 15:05:35.732781 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d14f3099-5a1c-410c-9692-a5b90076fcd6-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:35 crc kubenswrapper[4832]: I0312 15:05:35.733625 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d14f3099-5a1c-410c-9692-a5b90076fcd6-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.132146 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"45c3252f-6cf6-49c3-b42b-f692310a0e91","Type":"ContainerStarted","Data":"8574d7e5814897f4c997e4b1312578a9490b50717469bad0f2a9b8b60936eb7b"} Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.136061 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"07e4e284-b647-4f24-915d-b50315c0fb5e","Type":"ContainerStarted","Data":"15c8bf81ea9573bffaf88e9a5b4bea188749fbc4bb1fdf7ee031534b54dae382"} Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.139381 4832 generic.go:334] "Generic (PLEG): container finished" podID="d50077d7-6691-4664-89cc-3be14f2e8313" containerID="a8de2328180edadabaf07d351315b23470089ef42051e80839452c28be35ce7c" exitCode=0 Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.139475 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kmmfr" 
event={"ID":"d50077d7-6691-4664-89cc-3be14f2e8313","Type":"ContainerDied","Data":"a8de2328180edadabaf07d351315b23470089ef42051e80839452c28be35ce7c"} Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.143162 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf" event={"ID":"d14f3099-5a1c-410c-9692-a5b90076fcd6","Type":"ContainerDied","Data":"cbf3e576a088ddc5bd0feb276d7351188ead6f26356df2b5872cf17e8fed2ced"} Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.143202 4832 scope.go:117] "RemoveContainer" containerID="853d8fb6ee33ccd2609d90e1806ce235b18c9e17c7079b70d4a673acbd373a76" Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.143312 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-jj2rf" Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.148368 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0f04c5e2-4eb0-4515-aa61-006f0b34ee93","Type":"ContainerStarted","Data":"353bec605b169a2e050e15ac4a7fe6ace0f82b0b35d352a8f99ab9d824d2618e"} Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.148667 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.157388 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9f816534-91fd-42d6-8193-85a77ad3490c","Type":"ContainerStarted","Data":"67aea7477ebc6236ef618acd800acf46003a2be553aa9445c5a4bf771ea4982c"} Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.159288 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.163185 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"d73c1039-b9bc-4861-87d3-22457aecb575","Type":"ContainerStarted","Data":"a918988f8a8535fd3c5b70acdc0958201e079285f05e0caedd1f47a246558deb"} Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.165868 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"60f51f8d-71a9-4409-abd0-8981bced84a2","Type":"ContainerStarted","Data":"2e5fcaaa90667211e33999af1ac8363e585f02ce274a5c52b52df79c94cdb791"} Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.167649 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6x8t6" event={"ID":"9d4200a6-7cc2-4b4a-b01e-290567a2ec8c","Type":"ContainerStarted","Data":"78cfe0b40179f2f0209a97e68713838a422b31e64e3a85da90fe38c7ceba1544"} Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.168232 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-6x8t6" Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.243708 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=10.871788992 podStartE2EDuration="18.243690229s" podCreationTimestamp="2026-03-12 15:05:18 +0000 UTC" firstStartedPulling="2026-03-12 15:05:26.777026738 +0000 UTC m=+1085.421040954" lastFinishedPulling="2026-03-12 15:05:34.148927965 +0000 UTC m=+1092.792942191" observedRunningTime="2026-03-12 15:05:36.235952485 +0000 UTC m=+1094.879966721" watchObservedRunningTime="2026-03-12 15:05:36.243690229 +0000 UTC m=+1094.887704455" Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.244626 4832 scope.go:117] "RemoveContainer" containerID="dc805beb0d78892cbdc5e1553c89f3c5d7ba1e8c2c76831f320ea31ac671f81d" Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.277429 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-jj2rf"] Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.291411 4832 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-jj2rf"] Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.292922 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=8.1507371 podStartE2EDuration="16.292903306s" podCreationTimestamp="2026-03-12 15:05:20 +0000 UTC" firstStartedPulling="2026-03-12 15:05:27.099792935 +0000 UTC m=+1085.743807161" lastFinishedPulling="2026-03-12 15:05:35.241959141 +0000 UTC m=+1093.885973367" observedRunningTime="2026-03-12 15:05:36.26994997 +0000 UTC m=+1094.913964216" watchObservedRunningTime="2026-03-12 15:05:36.292903306 +0000 UTC m=+1094.936917532" Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.299348 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-6x8t6" podStartSLOduration=5.85137188 podStartE2EDuration="13.299337282s" podCreationTimestamp="2026-03-12 15:05:23 +0000 UTC" firstStartedPulling="2026-03-12 15:05:27.035744018 +0000 UTC m=+1085.679758244" lastFinishedPulling="2026-03-12 15:05:34.48370942 +0000 UTC m=+1093.127723646" observedRunningTime="2026-03-12 15:05:36.287835929 +0000 UTC m=+1094.931850165" watchObservedRunningTime="2026-03-12 15:05:36.299337282 +0000 UTC m=+1094.943351508" Mar 12 15:05:36 crc kubenswrapper[4832]: I0312 15:05:36.635951 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d14f3099-5a1c-410c-9692-a5b90076fcd6" path="/var/lib/kubelet/pods/d14f3099-5a1c-410c-9692-a5b90076fcd6/volumes" Mar 12 15:05:37 crc kubenswrapper[4832]: I0312 15:05:37.175886 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"667d5405-474a-4ab3-bcbf-8fd5d1c179aa","Type":"ContainerStarted","Data":"c92c8d3b9a7e2cdb4724ce742d8387f839d1938d182a7d4c794c41617a533a88"} Mar 12 15:05:37 crc kubenswrapper[4832]: I0312 15:05:37.180916 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"fef23d2a-252b-4733-bb4e-e83d5de2f4f4","Type":"ContainerStarted","Data":"93b6dcd69cc1d58f2baf9259c24e904358b6c3d70cf097eb0116925f4f7421f6"} Mar 12 15:05:37 crc kubenswrapper[4832]: I0312 15:05:37.183199 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kmmfr" event={"ID":"d50077d7-6691-4664-89cc-3be14f2e8313","Type":"ContainerStarted","Data":"f028d7094bc21f1ec69fc2b8ad5072670f099f198066e837ef9e2c582993f7cd"} Mar 12 15:05:37 crc kubenswrapper[4832]: I0312 15:05:37.183274 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kmmfr" event={"ID":"d50077d7-6691-4664-89cc-3be14f2e8313","Type":"ContainerStarted","Data":"f60fd5bf2f49da0a5b185045f7b1ac1cf8604f6835071dec7d51e126b2439312"} Mar 12 15:05:37 crc kubenswrapper[4832]: I0312 15:05:37.266475 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-kmmfr" podStartSLOduration=7.148582723 podStartE2EDuration="14.266451026s" podCreationTimestamp="2026-03-12 15:05:23 +0000 UTC" firstStartedPulling="2026-03-12 15:05:27.184389737 +0000 UTC m=+1085.828403963" lastFinishedPulling="2026-03-12 15:05:34.30225804 +0000 UTC m=+1092.946272266" observedRunningTime="2026-03-12 15:05:37.256831897 +0000 UTC m=+1095.900846143" watchObservedRunningTime="2026-03-12 15:05:37.266451026 +0000 UTC m=+1095.910465242" Mar 12 15:05:38 crc kubenswrapper[4832]: I0312 15:05:38.191498 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:38 crc kubenswrapper[4832]: I0312 15:05:38.191566 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:05:39 crc kubenswrapper[4832]: I0312 15:05:39.204203 4832 generic.go:334] "Generic (PLEG): container finished" podID="45c3252f-6cf6-49c3-b42b-f692310a0e91" 
containerID="8574d7e5814897f4c997e4b1312578a9490b50717469bad0f2a9b8b60936eb7b" exitCode=0 Mar 12 15:05:39 crc kubenswrapper[4832]: I0312 15:05:39.204303 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"45c3252f-6cf6-49c3-b42b-f692310a0e91","Type":"ContainerDied","Data":"8574d7e5814897f4c997e4b1312578a9490b50717469bad0f2a9b8b60936eb7b"} Mar 12 15:05:39 crc kubenswrapper[4832]: I0312 15:05:39.208703 4832 generic.go:334] "Generic (PLEG): container finished" podID="07e4e284-b647-4f24-915d-b50315c0fb5e" containerID="15c8bf81ea9573bffaf88e9a5b4bea188749fbc4bb1fdf7ee031534b54dae382" exitCode=0 Mar 12 15:05:39 crc kubenswrapper[4832]: I0312 15:05:39.208796 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"07e4e284-b647-4f24-915d-b50315c0fb5e","Type":"ContainerDied","Data":"15c8bf81ea9573bffaf88e9a5b4bea188749fbc4bb1fdf7ee031534b54dae382"} Mar 12 15:05:40 crc kubenswrapper[4832]: I0312 15:05:40.219581 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"45c3252f-6cf6-49c3-b42b-f692310a0e91","Type":"ContainerStarted","Data":"72e859a731483f4f69aec185216bef4130adf3d859d76b37c9cf2c1162a72f45"} Mar 12 15:05:40 crc kubenswrapper[4832]: I0312 15:05:40.221417 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"07e4e284-b647-4f24-915d-b50315c0fb5e","Type":"ContainerStarted","Data":"b4c114e4a4e97a33641cbcb6a417099b15a87e407aeb2b3160340dbcf61bd907"} Mar 12 15:05:40 crc kubenswrapper[4832]: I0312 15:05:40.223849 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d73c1039-b9bc-4861-87d3-22457aecb575","Type":"ContainerStarted","Data":"a9f0e8a55884700b5bcb4d029bcf8f889332603ff41e539a3f7c5ffcae8524da"} Mar 12 15:05:40 crc kubenswrapper[4832]: I0312 15:05:40.225851 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-nb-0" event={"ID":"60f51f8d-71a9-4409-abd0-8981bced84a2","Type":"ContainerStarted","Data":"885490bc205f36db8b0abc792eead2c44e25556ca5f73974606820303091c298"} Mar 12 15:05:40 crc kubenswrapper[4832]: I0312 15:05:40.250485 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=16.412708668 podStartE2EDuration="25.250460717s" podCreationTimestamp="2026-03-12 15:05:15 +0000 UTC" firstStartedPulling="2026-03-12 15:05:25.464557503 +0000 UTC m=+1084.108571729" lastFinishedPulling="2026-03-12 15:05:34.302309542 +0000 UTC m=+1092.946323778" observedRunningTime="2026-03-12 15:05:40.244246157 +0000 UTC m=+1098.888260393" watchObservedRunningTime="2026-03-12 15:05:40.250460717 +0000 UTC m=+1098.894474953" Mar 12 15:05:40 crc kubenswrapper[4832]: I0312 15:05:40.282097 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.201373858 podStartE2EDuration="17.282071104s" podCreationTimestamp="2026-03-12 15:05:23 +0000 UTC" firstStartedPulling="2026-03-12 15:05:27.039828236 +0000 UTC m=+1085.683842462" lastFinishedPulling="2026-03-12 15:05:39.120525482 +0000 UTC m=+1097.764539708" observedRunningTime="2026-03-12 15:05:40.268566882 +0000 UTC m=+1098.912581148" watchObservedRunningTime="2026-03-12 15:05:40.282071104 +0000 UTC m=+1098.926085370" Mar 12 15:05:40 crc kubenswrapper[4832]: I0312 15:05:40.290823 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=16.139748303 podStartE2EDuration="24.290802057s" podCreationTimestamp="2026-03-12 15:05:16 +0000 UTC" firstStartedPulling="2026-03-12 15:05:26.938095007 +0000 UTC m=+1085.582109233" lastFinishedPulling="2026-03-12 15:05:35.089148761 +0000 UTC m=+1093.733162987" observedRunningTime="2026-03-12 15:05:40.288047157 +0000 UTC m=+1098.932061383" watchObservedRunningTime="2026-03-12 
15:05:40.290802057 +0000 UTC m=+1098.934816283" Mar 12 15:05:40 crc kubenswrapper[4832]: I0312 15:05:40.320054 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.670482696 podStartE2EDuration="15.320026094s" podCreationTimestamp="2026-03-12 15:05:25 +0000 UTC" firstStartedPulling="2026-03-12 15:05:27.458878694 +0000 UTC m=+1086.102892920" lastFinishedPulling="2026-03-12 15:05:39.108422092 +0000 UTC m=+1097.752436318" observedRunningTime="2026-03-12 15:05:40.312325731 +0000 UTC m=+1098.956339967" watchObservedRunningTime="2026-03-12 15:05:40.320026094 +0000 UTC m=+1098.964040370" Mar 12 15:05:40 crc kubenswrapper[4832]: I0312 15:05:40.496752 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 12 15:05:41 crc kubenswrapper[4832]: I0312 15:05:41.839592 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:41 crc kubenswrapper[4832]: I0312 15:05:41.839882 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:41 crc kubenswrapper[4832]: I0312 15:05:41.909763 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.276299 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.510775 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-z29mt"] Mar 12 15:05:42 crc kubenswrapper[4832]: E0312 15:05:42.511079 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d14f3099-5a1c-410c-9692-a5b90076fcd6" containerName="dnsmasq-dns" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.511095 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d14f3099-5a1c-410c-9692-a5b90076fcd6" containerName="dnsmasq-dns" Mar 12 15:05:42 crc kubenswrapper[4832]: E0312 15:05:42.511107 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d14f3099-5a1c-410c-9692-a5b90076fcd6" containerName="init" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.511114 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d14f3099-5a1c-410c-9692-a5b90076fcd6" containerName="init" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.511266 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d14f3099-5a1c-410c-9692-a5b90076fcd6" containerName="dnsmasq-dns" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.512093 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-z29mt" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.514127 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.522466 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-z29mt"] Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.558126 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-config\") pod \"dnsmasq-dns-5bf47b49b7-z29mt\" (UID: \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z29mt" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.558221 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-z29mt\" (UID: \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z29mt" Mar 12 15:05:42 crc kubenswrapper[4832]: 
I0312 15:05:42.558294 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntbm7\" (UniqueName: \"kubernetes.io/projected/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-kube-api-access-ntbm7\") pod \"dnsmasq-dns-5bf47b49b7-z29mt\" (UID: \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z29mt" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.558360 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-z29mt\" (UID: \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z29mt" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.610534 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-455ln"] Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.611413 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.613216 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.647814 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-455ln"] Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.663586 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c56rs\" (UniqueName: \"kubernetes.io/projected/2ca5bacc-89cf-4734-a055-a1725ccd05e5-kube-api-access-c56rs\") pod \"ovn-controller-metrics-455ln\" (UID: \"2ca5bacc-89cf-4734-a055-a1725ccd05e5\") " pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.663637 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntbm7\" (UniqueName: \"kubernetes.io/projected/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-kube-api-access-ntbm7\") pod \"dnsmasq-dns-5bf47b49b7-z29mt\" (UID: \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z29mt" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.663675 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-z29mt\" (UID: \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z29mt" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.663700 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ca5bacc-89cf-4734-a055-a1725ccd05e5-combined-ca-bundle\") pod \"ovn-controller-metrics-455ln\" (UID: \"2ca5bacc-89cf-4734-a055-a1725ccd05e5\") " 
pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.663762 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-config\") pod \"dnsmasq-dns-5bf47b49b7-z29mt\" (UID: \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z29mt" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.663791 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2ca5bacc-89cf-4734-a055-a1725ccd05e5-ovs-rundir\") pod \"ovn-controller-metrics-455ln\" (UID: \"2ca5bacc-89cf-4734-a055-a1725ccd05e5\") " pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.663817 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2ca5bacc-89cf-4734-a055-a1725ccd05e5-ovn-rundir\") pod \"ovn-controller-metrics-455ln\" (UID: \"2ca5bacc-89cf-4734-a055-a1725ccd05e5\") " pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.663855 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca5bacc-89cf-4734-a055-a1725ccd05e5-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-455ln\" (UID: \"2ca5bacc-89cf-4734-a055-a1725ccd05e5\") " pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.663885 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-z29mt\" (UID: \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\") " 
pod="openstack/dnsmasq-dns-5bf47b49b7-z29mt" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.663914 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca5bacc-89cf-4734-a055-a1725ccd05e5-config\") pod \"ovn-controller-metrics-455ln\" (UID: \"2ca5bacc-89cf-4734-a055-a1725ccd05e5\") " pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.664946 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-z29mt\" (UID: \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z29mt" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.664970 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-config\") pod \"dnsmasq-dns-5bf47b49b7-z29mt\" (UID: \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z29mt" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.665639 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-z29mt\" (UID: \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z29mt" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.682804 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntbm7\" (UniqueName: \"kubernetes.io/projected/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-kube-api-access-ntbm7\") pod \"dnsmasq-dns-5bf47b49b7-z29mt\" (UID: \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z29mt" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 
15:05:42.765248 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ca5bacc-89cf-4734-a055-a1725ccd05e5-combined-ca-bundle\") pod \"ovn-controller-metrics-455ln\" (UID: \"2ca5bacc-89cf-4734-a055-a1725ccd05e5\") " pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.765436 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2ca5bacc-89cf-4734-a055-a1725ccd05e5-ovs-rundir\") pod \"ovn-controller-metrics-455ln\" (UID: \"2ca5bacc-89cf-4734-a055-a1725ccd05e5\") " pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.765477 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2ca5bacc-89cf-4734-a055-a1725ccd05e5-ovn-rundir\") pod \"ovn-controller-metrics-455ln\" (UID: \"2ca5bacc-89cf-4734-a055-a1725ccd05e5\") " pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.765553 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca5bacc-89cf-4734-a055-a1725ccd05e5-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-455ln\" (UID: \"2ca5bacc-89cf-4734-a055-a1725ccd05e5\") " pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.765592 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca5bacc-89cf-4734-a055-a1725ccd05e5-config\") pod \"ovn-controller-metrics-455ln\" (UID: \"2ca5bacc-89cf-4734-a055-a1725ccd05e5\") " pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.765638 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-c56rs\" (UniqueName: \"kubernetes.io/projected/2ca5bacc-89cf-4734-a055-a1725ccd05e5-kube-api-access-c56rs\") pod \"ovn-controller-metrics-455ln\" (UID: \"2ca5bacc-89cf-4734-a055-a1725ccd05e5\") " pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.766172 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2ca5bacc-89cf-4734-a055-a1725ccd05e5-ovs-rundir\") pod \"ovn-controller-metrics-455ln\" (UID: \"2ca5bacc-89cf-4734-a055-a1725ccd05e5\") " pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.766589 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca5bacc-89cf-4734-a055-a1725ccd05e5-config\") pod \"ovn-controller-metrics-455ln\" (UID: \"2ca5bacc-89cf-4734-a055-a1725ccd05e5\") " pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.767439 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2ca5bacc-89cf-4734-a055-a1725ccd05e5-ovn-rundir\") pod \"ovn-controller-metrics-455ln\" (UID: \"2ca5bacc-89cf-4734-a055-a1725ccd05e5\") " pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.769322 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca5bacc-89cf-4734-a055-a1725ccd05e5-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-455ln\" (UID: \"2ca5bacc-89cf-4734-a055-a1725ccd05e5\") " pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.770617 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2ca5bacc-89cf-4734-a055-a1725ccd05e5-combined-ca-bundle\") pod \"ovn-controller-metrics-455ln\" (UID: \"2ca5bacc-89cf-4734-a055-a1725ccd05e5\") " pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.779996 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c56rs\" (UniqueName: \"kubernetes.io/projected/2ca5bacc-89cf-4734-a055-a1725ccd05e5-kube-api-access-c56rs\") pod \"ovn-controller-metrics-455ln\" (UID: \"2ca5bacc-89cf-4734-a055-a1725ccd05e5\") " pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.829002 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-z29mt" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.938036 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-455ln" Mar 12 15:05:42 crc kubenswrapper[4832]: I0312 15:05:42.963897 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-z29mt"] Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.003607 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-lg46l"] Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.007884 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.011325 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.028746 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-lg46l"] Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.051778 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.070969 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-lg46l\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.071024 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf5h5\" (UniqueName: \"kubernetes.io/projected/2861e1f8-fb7a-4fca-8180-d0c561241aa6-kube-api-access-xf5h5\") pod \"dnsmasq-dns-8554648995-lg46l\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.071068 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-lg46l\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.071103 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-dns-svc\") pod \"dnsmasq-dns-8554648995-lg46l\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.071128 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-config\") pod \"dnsmasq-dns-8554648995-lg46l\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.092536 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.172216 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf5h5\" (UniqueName: \"kubernetes.io/projected/2861e1f8-fb7a-4fca-8180-d0c561241aa6-kube-api-access-xf5h5\") pod \"dnsmasq-dns-8554648995-lg46l\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.172289 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-lg46l\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.172337 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-dns-svc\") pod \"dnsmasq-dns-8554648995-lg46l\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.172358 
4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-config\") pod \"dnsmasq-dns-8554648995-lg46l\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.172436 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-lg46l\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.173233 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-lg46l\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.174455 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-lg46l\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.176737 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-config\") pod \"dnsmasq-dns-8554648995-lg46l\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.176868 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-dns-svc\") pod \"dnsmasq-dns-8554648995-lg46l\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.191116 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf5h5\" (UniqueName: \"kubernetes.io/projected/2861e1f8-fb7a-4fca-8180-d0c561241aa6-kube-api-access-xf5h5\") pod \"dnsmasq-dns-8554648995-lg46l\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.248528 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.282125 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.339661 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.382894 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-z29mt"] Mar 12 15:05:43 crc kubenswrapper[4832]: W0312 15:05:43.402434 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod994d05f0_3e65_425a_8bd9_a2b8c980f9ac.slice/crio-c881290e1095bd7c19cd324398decaa680a7e16051ac228a696696e5837fb09a WatchSource:0}: Error finding container c881290e1095bd7c19cd324398decaa680a7e16051ac228a696696e5837fb09a: Status 404 returned error can't find the container with id c881290e1095bd7c19cd324398decaa680a7e16051ac228a696696e5837fb09a Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.406727 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.444619 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-455ln"] Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.556689 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.560163 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.564719 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.564845 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.564893 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-9zrw9" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.564960 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.578078 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.681691 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr75j\" (UniqueName: \"kubernetes.io/projected/07b52275-cab8-4095-ac58-8842d81e39fd-kube-api-access-hr75j\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.681733 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07b52275-cab8-4095-ac58-8842d81e39fd-config\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.681758 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b52275-cab8-4095-ac58-8842d81e39fd-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " 
pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.681794 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b52275-cab8-4095-ac58-8842d81e39fd-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.681859 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/07b52275-cab8-4095-ac58-8842d81e39fd-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.681904 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07b52275-cab8-4095-ac58-8842d81e39fd-scripts\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.681955 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b52275-cab8-4095-ac58-8842d81e39fd-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.783462 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr75j\" (UniqueName: \"kubernetes.io/projected/07b52275-cab8-4095-ac58-8842d81e39fd-kube-api-access-hr75j\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.785984 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07b52275-cab8-4095-ac58-8842d81e39fd-config\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.786012 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b52275-cab8-4095-ac58-8842d81e39fd-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.786071 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b52275-cab8-4095-ac58-8842d81e39fd-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.786169 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/07b52275-cab8-4095-ac58-8842d81e39fd-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.786216 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07b52275-cab8-4095-ac58-8842d81e39fd-scripts\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.786252 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b52275-cab8-4095-ac58-8842d81e39fd-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.787651 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/07b52275-cab8-4095-ac58-8842d81e39fd-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.788246 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07b52275-cab8-4095-ac58-8842d81e39fd-config\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.788995 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07b52275-cab8-4095-ac58-8842d81e39fd-scripts\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.791779 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b52275-cab8-4095-ac58-8842d81e39fd-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.792882 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b52275-cab8-4095-ac58-8842d81e39fd-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.795403 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/07b52275-cab8-4095-ac58-8842d81e39fd-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.802583 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr75j\" (UniqueName: \"kubernetes.io/projected/07b52275-cab8-4095-ac58-8842d81e39fd-kube-api-access-hr75j\") pod \"ovn-northd-0\" (UID: \"07b52275-cab8-4095-ac58-8842d81e39fd\") " pod="openstack/ovn-northd-0" Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.834398 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-lg46l"] Mar 12 15:05:43 crc kubenswrapper[4832]: W0312 15:05:43.837935 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2861e1f8_fb7a_4fca_8180_d0c561241aa6.slice/crio-06ad8a4df55d79059aad62d4e9da51edda5b54a2b994168b3c0ed7a569fd6ad4 WatchSource:0}: Error finding container 06ad8a4df55d79059aad62d4e9da51edda5b54a2b994168b3c0ed7a569fd6ad4: Status 404 returned error can't find the container with id 06ad8a4df55d79059aad62d4e9da51edda5b54a2b994168b3c0ed7a569fd6ad4 Mar 12 15:05:43 crc kubenswrapper[4832]: I0312 15:05:43.891232 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.260260 4832 generic.go:334] "Generic (PLEG): container finished" podID="2861e1f8-fb7a-4fca-8180-d0c561241aa6" containerID="696a923cf37434e9108ef8cba4521c289308126a05a849e7b566b914adae7ec6" exitCode=0 Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.260324 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-lg46l" event={"ID":"2861e1f8-fb7a-4fca-8180-d0c561241aa6","Type":"ContainerDied","Data":"696a923cf37434e9108ef8cba4521c289308126a05a849e7b566b914adae7ec6"} Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.260906 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-lg46l" event={"ID":"2861e1f8-fb7a-4fca-8180-d0c561241aa6","Type":"ContainerStarted","Data":"06ad8a4df55d79059aad62d4e9da51edda5b54a2b994168b3c0ed7a569fd6ad4"} Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.264217 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-455ln" event={"ID":"2ca5bacc-89cf-4734-a055-a1725ccd05e5","Type":"ContainerStarted","Data":"1f2052016c28e3d866fec33505f4f87e0f7504b06c2b3acc5afc9e4da3f82b14"} Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.264248 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-455ln" event={"ID":"2ca5bacc-89cf-4734-a055-a1725ccd05e5","Type":"ContainerStarted","Data":"c6379ea8523a50fb5b255d726cd46b579b373627a2481522831a3bf33e03a2f6"} Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.266183 4832 generic.go:334] "Generic (PLEG): container finished" podID="994d05f0-3e65-425a-8bd9-a2b8c980f9ac" containerID="190aaefbeb698c93cda9c8ec00e1824f4f8562124be16c79c092d5c7d41ae2a7" exitCode=0 Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.266350 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-z29mt" 
event={"ID":"994d05f0-3e65-425a-8bd9-a2b8c980f9ac","Type":"ContainerDied","Data":"190aaefbeb698c93cda9c8ec00e1824f4f8562124be16c79c092d5c7d41ae2a7"} Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.266375 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-z29mt" event={"ID":"994d05f0-3e65-425a-8bd9-a2b8c980f9ac","Type":"ContainerStarted","Data":"c881290e1095bd7c19cd324398decaa680a7e16051ac228a696696e5837fb09a"} Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.334825 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-455ln" podStartSLOduration=2.334806711 podStartE2EDuration="2.334806711s" podCreationTimestamp="2026-03-12 15:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:05:44.331538686 +0000 UTC m=+1102.975552922" watchObservedRunningTime="2026-03-12 15:05:44.334806711 +0000 UTC m=+1102.978820937" Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.382981 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 12 15:05:44 crc kubenswrapper[4832]: W0312 15:05:44.413205 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07b52275_cab8_4095_ac58_8842d81e39fd.slice/crio-5e6eb8b6230ce60052466aa27af78451351cb218d321fb90566c2692e07ca0a8 WatchSource:0}: Error finding container 5e6eb8b6230ce60052466aa27af78451351cb218d321fb90566c2692e07ca0a8: Status 404 returned error can't find the container with id 5e6eb8b6230ce60052466aa27af78451351cb218d321fb90566c2692e07ca0a8 Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.590310 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-z29mt" Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.702786 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-ovsdbserver-nb\") pod \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\" (UID: \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\") " Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.702869 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-config\") pod \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\" (UID: \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\") " Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.702901 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntbm7\" (UniqueName: \"kubernetes.io/projected/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-kube-api-access-ntbm7\") pod \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\" (UID: \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\") " Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.702923 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-dns-svc\") pod \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\" (UID: \"994d05f0-3e65-425a-8bd9-a2b8c980f9ac\") " Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.708721 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-kube-api-access-ntbm7" (OuterVolumeSpecName: "kube-api-access-ntbm7") pod "994d05f0-3e65-425a-8bd9-a2b8c980f9ac" (UID: "994d05f0-3e65-425a-8bd9-a2b8c980f9ac"). InnerVolumeSpecName "kube-api-access-ntbm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.721195 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "994d05f0-3e65-425a-8bd9-a2b8c980f9ac" (UID: "994d05f0-3e65-425a-8bd9-a2b8c980f9ac"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.728347 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-config" (OuterVolumeSpecName: "config") pod "994d05f0-3e65-425a-8bd9-a2b8c980f9ac" (UID: "994d05f0-3e65-425a-8bd9-a2b8c980f9ac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.734118 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "994d05f0-3e65-425a-8bd9-a2b8c980f9ac" (UID: "994d05f0-3e65-425a-8bd9-a2b8c980f9ac"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.805205 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.805261 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.805287 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntbm7\" (UniqueName: \"kubernetes.io/projected/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-kube-api-access-ntbm7\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:44 crc kubenswrapper[4832]: I0312 15:05:44.805307 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994d05f0-3e65-425a-8bd9-a2b8c980f9ac-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:45 crc kubenswrapper[4832]: I0312 15:05:45.279108 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-lg46l" event={"ID":"2861e1f8-fb7a-4fca-8180-d0c561241aa6","Type":"ContainerStarted","Data":"f8ab975f49e1daa1b7b54f67d387974ef6d60fc65ee8a9102a751775ec4a3764"} Mar 12 15:05:45 crc kubenswrapper[4832]: I0312 15:05:45.281124 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:45 crc kubenswrapper[4832]: I0312 15:05:45.282877 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"07b52275-cab8-4095-ac58-8842d81e39fd","Type":"ContainerStarted","Data":"5e6eb8b6230ce60052466aa27af78451351cb218d321fb90566c2692e07ca0a8"} Mar 12 15:05:45 crc kubenswrapper[4832]: I0312 15:05:45.284140 4832 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-z29mt" event={"ID":"994d05f0-3e65-425a-8bd9-a2b8c980f9ac","Type":"ContainerDied","Data":"c881290e1095bd7c19cd324398decaa680a7e16051ac228a696696e5837fb09a"} Mar 12 15:05:45 crc kubenswrapper[4832]: I0312 15:05:45.284210 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-z29mt" Mar 12 15:05:45 crc kubenswrapper[4832]: I0312 15:05:45.284247 4832 scope.go:117] "RemoveContainer" containerID="190aaefbeb698c93cda9c8ec00e1824f4f8562124be16c79c092d5c7d41ae2a7" Mar 12 15:05:45 crc kubenswrapper[4832]: I0312 15:05:45.307354 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-lg46l" podStartSLOduration=3.307329893 podStartE2EDuration="3.307329893s" podCreationTimestamp="2026-03-12 15:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:05:45.297346773 +0000 UTC m=+1103.941361009" watchObservedRunningTime="2026-03-12 15:05:45.307329893 +0000 UTC m=+1103.951344149" Mar 12 15:05:45 crc kubenswrapper[4832]: I0312 15:05:45.376684 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-z29mt"] Mar 12 15:05:45 crc kubenswrapper[4832]: I0312 15:05:45.380162 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-z29mt"] Mar 12 15:05:46 crc kubenswrapper[4832]: I0312 15:05:46.292157 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"07b52275-cab8-4095-ac58-8842d81e39fd","Type":"ContainerStarted","Data":"c0128596eece3768894b0353b0711482804fbf8d3d2edad7590eae758200388a"} Mar 12 15:05:46 crc kubenswrapper[4832]: I0312 15:05:46.292777 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"07b52275-cab8-4095-ac58-8842d81e39fd","Type":"ContainerStarted","Data":"009cf972054de89c4440a84e7387f68788a0c8da1992c452ebcdf9fdfd7ea654"} Mar 12 15:05:46 crc kubenswrapper[4832]: I0312 15:05:46.318363 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.135547559 podStartE2EDuration="3.31833065s" podCreationTimestamp="2026-03-12 15:05:43 +0000 UTC" firstStartedPulling="2026-03-12 15:05:44.416249813 +0000 UTC m=+1103.060264039" lastFinishedPulling="2026-03-12 15:05:45.599032904 +0000 UTC m=+1104.243047130" observedRunningTime="2026-03-12 15:05:46.316354833 +0000 UTC m=+1104.960369069" watchObservedRunningTime="2026-03-12 15:05:46.31833065 +0000 UTC m=+1104.962344916" Mar 12 15:05:46 crc kubenswrapper[4832]: I0312 15:05:46.635427 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="994d05f0-3e65-425a-8bd9-a2b8c980f9ac" path="/var/lib/kubelet/pods/994d05f0-3e65-425a-8bd9-a2b8c980f9ac/volumes" Mar 12 15:05:46 crc kubenswrapper[4832]: I0312 15:05:46.747378 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 12 15:05:46 crc kubenswrapper[4832]: I0312 15:05:46.747448 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 12 15:05:46 crc kubenswrapper[4832]: I0312 15:05:46.866189 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 12 15:05:47 crc kubenswrapper[4832]: I0312 15:05:47.299814 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 12 15:05:47 crc kubenswrapper[4832]: I0312 15:05:47.379867 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 12 15:05:48 crc kubenswrapper[4832]: I0312 15:05:48.087374 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:48 crc kubenswrapper[4832]: I0312 15:05:48.087439 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:48 crc kubenswrapper[4832]: I0312 15:05:48.169429 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:48 crc kubenswrapper[4832]: I0312 15:05:48.378671 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.443630 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-4cbc-account-create-update-q4tzf"] Mar 12 15:05:49 crc kubenswrapper[4832]: E0312 15:05:49.444299 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="994d05f0-3e65-425a-8bd9-a2b8c980f9ac" containerName="init" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.444314 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="994d05f0-3e65-425a-8bd9-a2b8c980f9ac" containerName="init" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.444577 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="994d05f0-3e65-425a-8bd9-a2b8c980f9ac" containerName="init" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.445165 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4cbc-account-create-update-q4tzf" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.447208 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.462535 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4cbc-account-create-update-q4tzf"] Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.486446 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-jvlmr"] Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.487877 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jvlmr" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.498152 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jvlmr"] Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.576383 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27x8g\" (UniqueName: \"kubernetes.io/projected/bb3e6885-9e7b-4a4a-b237-3233fcbe2129-kube-api-access-27x8g\") pod \"keystone-4cbc-account-create-update-q4tzf\" (UID: \"bb3e6885-9e7b-4a4a-b237-3233fcbe2129\") " pod="openstack/keystone-4cbc-account-create-update-q4tzf" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.576476 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb3e6885-9e7b-4a4a-b237-3233fcbe2129-operator-scripts\") pod \"keystone-4cbc-account-create-update-q4tzf\" (UID: \"bb3e6885-9e7b-4a4a-b237-3233fcbe2129\") " pod="openstack/keystone-4cbc-account-create-update-q4tzf" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.576546 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmbm\" 
(UniqueName: \"kubernetes.io/projected/6f3430c0-f2f1-48bb-ae3f-2337f5ea30de-kube-api-access-pdmbm\") pod \"keystone-db-create-jvlmr\" (UID: \"6f3430c0-f2f1-48bb-ae3f-2337f5ea30de\") " pod="openstack/keystone-db-create-jvlmr" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.576606 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f3430c0-f2f1-48bb-ae3f-2337f5ea30de-operator-scripts\") pod \"keystone-db-create-jvlmr\" (UID: \"6f3430c0-f2f1-48bb-ae3f-2337f5ea30de\") " pod="openstack/keystone-db-create-jvlmr" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.640365 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-slgfx"] Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.641493 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-slgfx" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.646933 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c091-account-create-update-5bw46"] Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.648071 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c091-account-create-update-5bw46" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.650193 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.656166 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-slgfx"] Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.670762 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c091-account-create-update-5bw46"] Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.681629 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb3e6885-9e7b-4a4a-b237-3233fcbe2129-operator-scripts\") pod \"keystone-4cbc-account-create-update-q4tzf\" (UID: \"bb3e6885-9e7b-4a4a-b237-3233fcbe2129\") " pod="openstack/keystone-4cbc-account-create-update-q4tzf" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.681883 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdmbm\" (UniqueName: \"kubernetes.io/projected/6f3430c0-f2f1-48bb-ae3f-2337f5ea30de-kube-api-access-pdmbm\") pod \"keystone-db-create-jvlmr\" (UID: \"6f3430c0-f2f1-48bb-ae3f-2337f5ea30de\") " pod="openstack/keystone-db-create-jvlmr" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.681999 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c26b8772-ff30-45b4-a167-df95b7051fe3-operator-scripts\") pod \"placement-db-create-slgfx\" (UID: \"c26b8772-ff30-45b4-a167-df95b7051fe3\") " pod="openstack/placement-db-create-slgfx" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.682156 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwswb\" 
(UniqueName: \"kubernetes.io/projected/c26b8772-ff30-45b4-a167-df95b7051fe3-kube-api-access-zwswb\") pod \"placement-db-create-slgfx\" (UID: \"c26b8772-ff30-45b4-a167-df95b7051fe3\") " pod="openstack/placement-db-create-slgfx" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.682286 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb3e6885-9e7b-4a4a-b237-3233fcbe2129-operator-scripts\") pod \"keystone-4cbc-account-create-update-q4tzf\" (UID: \"bb3e6885-9e7b-4a4a-b237-3233fcbe2129\") " pod="openstack/keystone-4cbc-account-create-update-q4tzf" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.682293 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f3430c0-f2f1-48bb-ae3f-2337f5ea30de-operator-scripts\") pod \"keystone-db-create-jvlmr\" (UID: \"6f3430c0-f2f1-48bb-ae3f-2337f5ea30de\") " pod="openstack/keystone-db-create-jvlmr" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.688821 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27x8g\" (UniqueName: \"kubernetes.io/projected/bb3e6885-9e7b-4a4a-b237-3233fcbe2129-kube-api-access-27x8g\") pod \"keystone-4cbc-account-create-update-q4tzf\" (UID: \"bb3e6885-9e7b-4a4a-b237-3233fcbe2129\") " pod="openstack/keystone-4cbc-account-create-update-q4tzf" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.689447 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f3430c0-f2f1-48bb-ae3f-2337f5ea30de-operator-scripts\") pod \"keystone-db-create-jvlmr\" (UID: \"6f3430c0-f2f1-48bb-ae3f-2337f5ea30de\") " pod="openstack/keystone-db-create-jvlmr" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.713407 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27x8g\" 
(UniqueName: \"kubernetes.io/projected/bb3e6885-9e7b-4a4a-b237-3233fcbe2129-kube-api-access-27x8g\") pod \"keystone-4cbc-account-create-update-q4tzf\" (UID: \"bb3e6885-9e7b-4a4a-b237-3233fcbe2129\") " pod="openstack/keystone-4cbc-account-create-update-q4tzf" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.739082 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdmbm\" (UniqueName: \"kubernetes.io/projected/6f3430c0-f2f1-48bb-ae3f-2337f5ea30de-kube-api-access-pdmbm\") pod \"keystone-db-create-jvlmr\" (UID: \"6f3430c0-f2f1-48bb-ae3f-2337f5ea30de\") " pod="openstack/keystone-db-create-jvlmr" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.775179 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4cbc-account-create-update-q4tzf" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.790268 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44a982f3-9124-416e-b0b1-199a57462954-operator-scripts\") pod \"placement-c091-account-create-update-5bw46\" (UID: \"44a982f3-9124-416e-b0b1-199a57462954\") " pod="openstack/placement-c091-account-create-update-5bw46" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.790317 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2hkx\" (UniqueName: \"kubernetes.io/projected/44a982f3-9124-416e-b0b1-199a57462954-kube-api-access-r2hkx\") pod \"placement-c091-account-create-update-5bw46\" (UID: \"44a982f3-9124-416e-b0b1-199a57462954\") " pod="openstack/placement-c091-account-create-update-5bw46" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.790392 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c26b8772-ff30-45b4-a167-df95b7051fe3-operator-scripts\") pod 
\"placement-db-create-slgfx\" (UID: \"c26b8772-ff30-45b4-a167-df95b7051fe3\") " pod="openstack/placement-db-create-slgfx" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.790429 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwswb\" (UniqueName: \"kubernetes.io/projected/c26b8772-ff30-45b4-a167-df95b7051fe3-kube-api-access-zwswb\") pod \"placement-db-create-slgfx\" (UID: \"c26b8772-ff30-45b4-a167-df95b7051fe3\") " pod="openstack/placement-db-create-slgfx" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.791335 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c26b8772-ff30-45b4-a167-df95b7051fe3-operator-scripts\") pod \"placement-db-create-slgfx\" (UID: \"c26b8772-ff30-45b4-a167-df95b7051fe3\") " pod="openstack/placement-db-create-slgfx" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.807066 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwswb\" (UniqueName: \"kubernetes.io/projected/c26b8772-ff30-45b4-a167-df95b7051fe3-kube-api-access-zwswb\") pod \"placement-db-create-slgfx\" (UID: \"c26b8772-ff30-45b4-a167-df95b7051fe3\") " pod="openstack/placement-db-create-slgfx" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.818436 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-jvlmr" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.893633 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44a982f3-9124-416e-b0b1-199a57462954-operator-scripts\") pod \"placement-c091-account-create-update-5bw46\" (UID: \"44a982f3-9124-416e-b0b1-199a57462954\") " pod="openstack/placement-c091-account-create-update-5bw46" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.893686 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2hkx\" (UniqueName: \"kubernetes.io/projected/44a982f3-9124-416e-b0b1-199a57462954-kube-api-access-r2hkx\") pod \"placement-c091-account-create-update-5bw46\" (UID: \"44a982f3-9124-416e-b0b1-199a57462954\") " pod="openstack/placement-c091-account-create-update-5bw46" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.894580 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44a982f3-9124-416e-b0b1-199a57462954-operator-scripts\") pod \"placement-c091-account-create-update-5bw46\" (UID: \"44a982f3-9124-416e-b0b1-199a57462954\") " pod="openstack/placement-c091-account-create-update-5bw46" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.911552 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2hkx\" (UniqueName: \"kubernetes.io/projected/44a982f3-9124-416e-b0b1-199a57462954-kube-api-access-r2hkx\") pod \"placement-c091-account-create-update-5bw46\" (UID: \"44a982f3-9124-416e-b0b1-199a57462954\") " pod="openstack/placement-c091-account-create-update-5bw46" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.962579 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-slgfx" Mar 12 15:05:49 crc kubenswrapper[4832]: I0312 15:05:49.974656 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c091-account-create-update-5bw46" Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.225202 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4cbc-account-create-update-q4tzf"] Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.239655 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-slgfx"] Mar 12 15:05:50 crc kubenswrapper[4832]: W0312 15:05:50.243850 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb3e6885_9e7b_4a4a_b237_3233fcbe2129.slice/crio-9ac3c6ea15cb890d5940f17f1705aa7d45df11a4a61acaf7ec5947d56312180f WatchSource:0}: Error finding container 9ac3c6ea15cb890d5940f17f1705aa7d45df11a4a61acaf7ec5947d56312180f: Status 404 returned error can't find the container with id 9ac3c6ea15cb890d5940f17f1705aa7d45df11a4a61acaf7ec5947d56312180f Mar 12 15:05:50 crc kubenswrapper[4832]: W0312 15:05:50.252248 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc26b8772_ff30_45b4_a167_df95b7051fe3.slice/crio-bf64c83e77a9546d2c8f9836ba71c9388a948d6db19cb9384cde3f557fdbd6fe WatchSource:0}: Error finding container bf64c83e77a9546d2c8f9836ba71c9388a948d6db19cb9384cde3f557fdbd6fe: Status 404 returned error can't find the container with id bf64c83e77a9546d2c8f9836ba71c9388a948d6db19cb9384cde3f557fdbd6fe Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.270010 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c091-account-create-update-5bw46"] Mar 12 15:05:50 crc kubenswrapper[4832]: W0312 15:05:50.284350 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44a982f3_9124_416e_b0b1_199a57462954.slice/crio-422a37a46ef3119b0e39c954ae94f39b94a52ec210fd30bf338f969cfad56574 WatchSource:0}: Error finding container 422a37a46ef3119b0e39c954ae94f39b94a52ec210fd30bf338f969cfad56574: Status 404 returned error can't find the container with id 422a37a46ef3119b0e39c954ae94f39b94a52ec210fd30bf338f969cfad56574 Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.317981 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jvlmr"] Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.320112 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c091-account-create-update-5bw46" event={"ID":"44a982f3-9124-416e-b0b1-199a57462954","Type":"ContainerStarted","Data":"422a37a46ef3119b0e39c954ae94f39b94a52ec210fd30bf338f969cfad56574"} Mar 12 15:05:50 crc kubenswrapper[4832]: W0312 15:05:50.321751 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f3430c0_f2f1_48bb_ae3f_2337f5ea30de.slice/crio-24df4c843785f8a2407c8942cc77b7f421c4e32e60ca8bb157ff27e4c199d5d9 WatchSource:0}: Error finding container 24df4c843785f8a2407c8942cc77b7f421c4e32e60ca8bb157ff27e4c199d5d9: Status 404 returned error can't find the container with id 24df4c843785f8a2407c8942cc77b7f421c4e32e60ca8bb157ff27e4c199d5d9 Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.321919 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-slgfx" event={"ID":"c26b8772-ff30-45b4-a167-df95b7051fe3","Type":"ContainerStarted","Data":"bf64c83e77a9546d2c8f9836ba71c9388a948d6db19cb9384cde3f557fdbd6fe"} Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.323813 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4cbc-account-create-update-q4tzf" 
event={"ID":"bb3e6885-9e7b-4a4a-b237-3233fcbe2129","Type":"ContainerStarted","Data":"9ac3c6ea15cb890d5940f17f1705aa7d45df11a4a61acaf7ec5947d56312180f"} Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.532455 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-lg46l"] Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.532764 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-lg46l" podUID="2861e1f8-fb7a-4fca-8180-d0c561241aa6" containerName="dnsmasq-dns" containerID="cri-o://f8ab975f49e1daa1b7b54f67d387974ef6d60fc65ee8a9102a751775ec4a3764" gracePeriod=10 Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.537119 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.575911 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4hx64"] Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.578679 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.587068 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4hx64"] Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.710707 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-4hx64\" (UID: \"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.710795 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-4hx64\" (UID: \"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.711101 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh8mt\" (UniqueName: \"kubernetes.io/projected/e40347b7-ed6a-49e8-af1b-d361a332bf94-kube-api-access-vh8mt\") pod \"dnsmasq-dns-b8fbc5445-4hx64\" (UID: \"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.711149 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-4hx64\" (UID: \"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.711198 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-config\") pod \"dnsmasq-dns-b8fbc5445-4hx64\" (UID: \"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.812886 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-4hx64\" (UID: \"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.813198 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-4hx64\" (UID: \"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.813261 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh8mt\" (UniqueName: \"kubernetes.io/projected/e40347b7-ed6a-49e8-af1b-d361a332bf94-kube-api-access-vh8mt\") pod \"dnsmasq-dns-b8fbc5445-4hx64\" (UID: \"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.813281 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-4hx64\" (UID: \"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.813309 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-config\") pod \"dnsmasq-dns-b8fbc5445-4hx64\" (UID: \"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.814174 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-config\") pod \"dnsmasq-dns-b8fbc5445-4hx64\" (UID: \"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.814391 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-4hx64\" (UID: \"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.814438 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-4hx64\" (UID: \"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.814721 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-4hx64\" (UID: \"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.831723 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh8mt\" (UniqueName: \"kubernetes.io/projected/e40347b7-ed6a-49e8-af1b-d361a332bf94-kube-api-access-vh8mt\") pod \"dnsmasq-dns-b8fbc5445-4hx64\" (UID: 
\"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:05:50 crc kubenswrapper[4832]: I0312 15:05:50.915998 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.334846 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jvlmr" event={"ID":"6f3430c0-f2f1-48bb-ae3f-2337f5ea30de","Type":"ContainerStarted","Data":"24df4c843785f8a2407c8942cc77b7f421c4e32e60ca8bb157ff27e4c199d5d9"} Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.337203 4832 generic.go:334] "Generic (PLEG): container finished" podID="2861e1f8-fb7a-4fca-8180-d0c561241aa6" containerID="f8ab975f49e1daa1b7b54f67d387974ef6d60fc65ee8a9102a751775ec4a3764" exitCode=0 Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.337234 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-lg46l" event={"ID":"2861e1f8-fb7a-4fca-8180-d0c561241aa6","Type":"ContainerDied","Data":"f8ab975f49e1daa1b7b54f67d387974ef6d60fc65ee8a9102a751775ec4a3764"} Mar 12 15:05:51 crc kubenswrapper[4832]: W0312 15:05:51.343757 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode40347b7_ed6a_49e8_af1b_d361a332bf94.slice/crio-ab46b9523981e6d26173e665c95a0b3971ab5aab6955683972fd8053dfde19e2 WatchSource:0}: Error finding container ab46b9523981e6d26173e665c95a0b3971ab5aab6955683972fd8053dfde19e2: Status 404 returned error can't find the container with id ab46b9523981e6d26173e665c95a0b3971ab5aab6955683972fd8053dfde19e2 Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.349939 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4hx64"] Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.673567 4832 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/swift-storage-0"] Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.680731 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.688147 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-jpt5h" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.688213 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.688464 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.688600 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.716758 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.728009 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-cache\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.728110 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.728182 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.728308 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlnnm\" (UniqueName: \"kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-kube-api-access-rlnnm\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.728573 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-lock\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.728628 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-etc-swift\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.829692 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-cache\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.830013 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " 
pod="openstack/swift-storage-0" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.830037 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.830070 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlnnm\" (UniqueName: \"kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-kube-api-access-rlnnm\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.830121 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-lock\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.830140 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-etc-swift\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.830197 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-cache\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:05:51 crc kubenswrapper[4832]: E0312 15:05:51.830291 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 
15:05:51 crc kubenswrapper[4832]: E0312 15:05:51.830307 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 15:05:51 crc kubenswrapper[4832]: E0312 15:05:51.830354 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-etc-swift podName:c2fcebd5-a8cd-4290-9055-e0a7bbec2854 nodeName:}" failed. No retries permitted until 2026-03-12 15:05:52.330338335 +0000 UTC m=+1110.974352561 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-etc-swift") pod "swift-storage-0" (UID: "c2fcebd5-a8cd-4290-9055-e0a7bbec2854") : configmap "swift-ring-files" not found Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.830500 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.831071 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-lock\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.837080 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.847297 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlnnm\" (UniqueName: \"kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-kube-api-access-rlnnm\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:05:51 crc kubenswrapper[4832]: I0312 15:05:51.860055 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:05:52 crc kubenswrapper[4832]: I0312 15:05:52.339796 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-etc-swift\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:05:52 crc kubenswrapper[4832]: E0312 15:05:52.339995 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 15:05:52 crc kubenswrapper[4832]: E0312 15:05:52.340010 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 15:05:52 crc kubenswrapper[4832]: E0312 15:05:52.340054 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-etc-swift podName:c2fcebd5-a8cd-4290-9055-e0a7bbec2854 nodeName:}" failed. No retries permitted until 2026-03-12 15:05:53.34003903 +0000 UTC m=+1111.984053246 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-etc-swift") pod "swift-storage-0" (UID: "c2fcebd5-a8cd-4290-9055-e0a7bbec2854") : configmap "swift-ring-files" not found Mar 12 15:05:52 crc kubenswrapper[4832]: I0312 15:05:52.347479 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" event={"ID":"e40347b7-ed6a-49e8-af1b-d361a332bf94","Type":"ContainerStarted","Data":"ab46b9523981e6d26173e665c95a0b3971ab5aab6955683972fd8053dfde19e2"} Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.356114 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-etc-swift\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:05:53 crc kubenswrapper[4832]: E0312 15:05:53.356338 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 15:05:53 crc kubenswrapper[4832]: E0312 15:05:53.356556 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 15:05:53 crc kubenswrapper[4832]: E0312 15:05:53.356646 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-etc-swift podName:c2fcebd5-a8cd-4290-9055-e0a7bbec2854 nodeName:}" failed. No retries permitted until 2026-03-12 15:05:55.35661797 +0000 UTC m=+1114.000632236 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-etc-swift") pod "swift-storage-0" (UID: "c2fcebd5-a8cd-4290-9055-e0a7bbec2854") : configmap "swift-ring-files" not found Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.358829 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-lg46l" event={"ID":"2861e1f8-fb7a-4fca-8180-d0c561241aa6","Type":"ContainerDied","Data":"06ad8a4df55d79059aad62d4e9da51edda5b54a2b994168b3c0ed7a569fd6ad4"} Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.358887 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06ad8a4df55d79059aad62d4e9da51edda5b54a2b994168b3c0ed7a569fd6ad4" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.360440 4832 generic.go:334] "Generic (PLEG): container finished" podID="c26b8772-ff30-45b4-a167-df95b7051fe3" containerID="bcfa915cd490c630db2abc2c0e37dad1a1b7815453d1739fb01f6570e26fc849" exitCode=0 Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.360545 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-slgfx" event={"ID":"c26b8772-ff30-45b4-a167-df95b7051fe3","Type":"ContainerDied","Data":"bcfa915cd490c630db2abc2c0e37dad1a1b7815453d1739fb01f6570e26fc849"} Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.362679 4832 generic.go:334] "Generic (PLEG): container finished" podID="bb3e6885-9e7b-4a4a-b237-3233fcbe2129" containerID="37da2e82dbc02e5d6021889807d0faa9781ec85b5448a46fa01edc9515ec1f54" exitCode=0 Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.362725 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4cbc-account-create-update-q4tzf" event={"ID":"bb3e6885-9e7b-4a4a-b237-3233fcbe2129","Type":"ContainerDied","Data":"37da2e82dbc02e5d6021889807d0faa9781ec85b5448a46fa01edc9515ec1f54"} Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.364448 4832 
generic.go:334] "Generic (PLEG): container finished" podID="6f3430c0-f2f1-48bb-ae3f-2337f5ea30de" containerID="6446febc605dfdb695dcd7263ca8a17196203857c35e846e96842e2bf9504d36" exitCode=0 Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.364554 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jvlmr" event={"ID":"6f3430c0-f2f1-48bb-ae3f-2337f5ea30de","Type":"ContainerDied","Data":"6446febc605dfdb695dcd7263ca8a17196203857c35e846e96842e2bf9504d36"} Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.365904 4832 generic.go:334] "Generic (PLEG): container finished" podID="e40347b7-ed6a-49e8-af1b-d361a332bf94" containerID="3e61315a6e94c39e4920a73312884690e4d5ad25bf55d03d3dd568e73238bcae" exitCode=0 Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.365961 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" event={"ID":"e40347b7-ed6a-49e8-af1b-d361a332bf94","Type":"ContainerDied","Data":"3e61315a6e94c39e4920a73312884690e4d5ad25bf55d03d3dd568e73238bcae"} Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.368824 4832 generic.go:334] "Generic (PLEG): container finished" podID="44a982f3-9124-416e-b0b1-199a57462954" containerID="c7ef3fce7bf4a3495d4155eb1878ea12e2faca82b721740b4821bf877012e0ee" exitCode=0 Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.368885 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c091-account-create-update-5bw46" event={"ID":"44a982f3-9124-416e-b0b1-199a57462954","Type":"ContainerDied","Data":"c7ef3fce7bf4a3495d4155eb1878ea12e2faca82b721740b4821bf877012e0ee"} Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.487959 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.559248 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-dns-svc\") pod \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.559352 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-config\") pod \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.559547 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-ovsdbserver-nb\") pod \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.559664 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf5h5\" (UniqueName: \"kubernetes.io/projected/2861e1f8-fb7a-4fca-8180-d0c561241aa6-kube-api-access-xf5h5\") pod \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.559762 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-ovsdbserver-sb\") pod \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\" (UID: \"2861e1f8-fb7a-4fca-8180-d0c561241aa6\") " Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.569702 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2861e1f8-fb7a-4fca-8180-d0c561241aa6-kube-api-access-xf5h5" (OuterVolumeSpecName: "kube-api-access-xf5h5") pod "2861e1f8-fb7a-4fca-8180-d0c561241aa6" (UID: "2861e1f8-fb7a-4fca-8180-d0c561241aa6"). InnerVolumeSpecName "kube-api-access-xf5h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.590541 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-pjxk7"] Mar 12 15:05:53 crc kubenswrapper[4832]: E0312 15:05:53.590959 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2861e1f8-fb7a-4fca-8180-d0c561241aa6" containerName="dnsmasq-dns" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.590991 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2861e1f8-fb7a-4fca-8180-d0c561241aa6" containerName="dnsmasq-dns" Mar 12 15:05:53 crc kubenswrapper[4832]: E0312 15:05:53.591028 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2861e1f8-fb7a-4fca-8180-d0c561241aa6" containerName="init" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.591039 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2861e1f8-fb7a-4fca-8180-d0c561241aa6" containerName="init" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.591294 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2861e1f8-fb7a-4fca-8180-d0c561241aa6" containerName="dnsmasq-dns" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.595020 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pjxk7" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.599091 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pjxk7"] Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.623558 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2861e1f8-fb7a-4fca-8180-d0c561241aa6" (UID: "2861e1f8-fb7a-4fca-8180-d0c561241aa6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.635985 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2861e1f8-fb7a-4fca-8180-d0c561241aa6" (UID: "2861e1f8-fb7a-4fca-8180-d0c561241aa6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.639807 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-config" (OuterVolumeSpecName: "config") pod "2861e1f8-fb7a-4fca-8180-d0c561241aa6" (UID: "2861e1f8-fb7a-4fca-8180-d0c561241aa6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.645633 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2861e1f8-fb7a-4fca-8180-d0c561241aa6" (UID: "2861e1f8-fb7a-4fca-8180-d0c561241aa6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.664176 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54f9r\" (UniqueName: \"kubernetes.io/projected/9e44c919-5e7d-4b58-85d7-919cd52679e2-kube-api-access-54f9r\") pod \"glance-db-create-pjxk7\" (UID: \"9e44c919-5e7d-4b58-85d7-919cd52679e2\") " pod="openstack/glance-db-create-pjxk7" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.664407 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e44c919-5e7d-4b58-85d7-919cd52679e2-operator-scripts\") pod \"glance-db-create-pjxk7\" (UID: \"9e44c919-5e7d-4b58-85d7-919cd52679e2\") " pod="openstack/glance-db-create-pjxk7" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.664636 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.664687 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf5h5\" (UniqueName: \"kubernetes.io/projected/2861e1f8-fb7a-4fca-8180-d0c561241aa6-kube-api-access-xf5h5\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.664742 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.670719 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 
15:05:53.670736 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2861e1f8-fb7a-4fca-8180-d0c561241aa6-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.690662 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e0bf-account-create-update-8zpsg"] Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.691899 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e0bf-account-create-update-8zpsg" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.694283 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.698056 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e0bf-account-create-update-8zpsg"] Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.772264 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28qcs\" (UniqueName: \"kubernetes.io/projected/3ae29670-d754-4a65-b982-146c9f8e8f59-kube-api-access-28qcs\") pod \"glance-e0bf-account-create-update-8zpsg\" (UID: \"3ae29670-d754-4a65-b982-146c9f8e8f59\") " pod="openstack/glance-e0bf-account-create-update-8zpsg" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.772447 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e44c919-5e7d-4b58-85d7-919cd52679e2-operator-scripts\") pod \"glance-db-create-pjxk7\" (UID: \"9e44c919-5e7d-4b58-85d7-919cd52679e2\") " pod="openstack/glance-db-create-pjxk7" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.772924 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54f9r\" (UniqueName: 
\"kubernetes.io/projected/9e44c919-5e7d-4b58-85d7-919cd52679e2-kube-api-access-54f9r\") pod \"glance-db-create-pjxk7\" (UID: \"9e44c919-5e7d-4b58-85d7-919cd52679e2\") " pod="openstack/glance-db-create-pjxk7" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.773001 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ae29670-d754-4a65-b982-146c9f8e8f59-operator-scripts\") pod \"glance-e0bf-account-create-update-8zpsg\" (UID: \"3ae29670-d754-4a65-b982-146c9f8e8f59\") " pod="openstack/glance-e0bf-account-create-update-8zpsg" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.773456 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e44c919-5e7d-4b58-85d7-919cd52679e2-operator-scripts\") pod \"glance-db-create-pjxk7\" (UID: \"9e44c919-5e7d-4b58-85d7-919cd52679e2\") " pod="openstack/glance-db-create-pjxk7" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.793031 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54f9r\" (UniqueName: \"kubernetes.io/projected/9e44c919-5e7d-4b58-85d7-919cd52679e2-kube-api-access-54f9r\") pod \"glance-db-create-pjxk7\" (UID: \"9e44c919-5e7d-4b58-85d7-919cd52679e2\") " pod="openstack/glance-db-create-pjxk7" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.875455 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ae29670-d754-4a65-b982-146c9f8e8f59-operator-scripts\") pod \"glance-e0bf-account-create-update-8zpsg\" (UID: \"3ae29670-d754-4a65-b982-146c9f8e8f59\") " pod="openstack/glance-e0bf-account-create-update-8zpsg" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.875797 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28qcs\" (UniqueName: 
\"kubernetes.io/projected/3ae29670-d754-4a65-b982-146c9f8e8f59-kube-api-access-28qcs\") pod \"glance-e0bf-account-create-update-8zpsg\" (UID: \"3ae29670-d754-4a65-b982-146c9f8e8f59\") " pod="openstack/glance-e0bf-account-create-update-8zpsg" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.876803 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ae29670-d754-4a65-b982-146c9f8e8f59-operator-scripts\") pod \"glance-e0bf-account-create-update-8zpsg\" (UID: \"3ae29670-d754-4a65-b982-146c9f8e8f59\") " pod="openstack/glance-e0bf-account-create-update-8zpsg" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.906120 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28qcs\" (UniqueName: \"kubernetes.io/projected/3ae29670-d754-4a65-b982-146c9f8e8f59-kube-api-access-28qcs\") pod \"glance-e0bf-account-create-update-8zpsg\" (UID: \"3ae29670-d754-4a65-b982-146c9f8e8f59\") " pod="openstack/glance-e0bf-account-create-update-8zpsg" Mar 12 15:05:53 crc kubenswrapper[4832]: I0312 15:05:53.930449 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pjxk7" Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.008711 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e0bf-account-create-update-8zpsg" Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.387892 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" event={"ID":"e40347b7-ed6a-49e8-af1b-d361a332bf94","Type":"ContainerStarted","Data":"aabb2defc601f3d3797ce0b64b067866a91e85c962c904780f3178a960455189"} Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.389070 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-lg46l" Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.389397 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.412386 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pjxk7"] Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.414143 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" podStartSLOduration=4.414119347 podStartE2EDuration="4.414119347s" podCreationTimestamp="2026-03-12 15:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:05:54.409817782 +0000 UTC m=+1113.053832008" watchObservedRunningTime="2026-03-12 15:05:54.414119347 +0000 UTC m=+1113.058133593" Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.439817 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-lg46l"] Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.447397 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-lg46l"] Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.521920 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e0bf-account-create-update-8zpsg"] Mar 12 15:05:54 crc kubenswrapper[4832]: W0312 15:05:54.557103 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ae29670_d754_4a65_b982_146c9f8e8f59.slice/crio-017f15b861ef104227d2ca4d73a8365fc1d2fc867af069a179b228eb2996e654 WatchSource:0}: Error finding container 017f15b861ef104227d2ca4d73a8365fc1d2fc867af069a179b228eb2996e654: Status 404 returned error can't find the container with id 
017f15b861ef104227d2ca4d73a8365fc1d2fc867af069a179b228eb2996e654 Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.632906 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2861e1f8-fb7a-4fca-8180-d0c561241aa6" path="/var/lib/kubelet/pods/2861e1f8-fb7a-4fca-8180-d0c561241aa6/volumes" Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.747665 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4cbc-account-create-update-q4tzf" Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.792947 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb3e6885-9e7b-4a4a-b237-3233fcbe2129-operator-scripts\") pod \"bb3e6885-9e7b-4a4a-b237-3233fcbe2129\" (UID: \"bb3e6885-9e7b-4a4a-b237-3233fcbe2129\") " Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.793102 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27x8g\" (UniqueName: \"kubernetes.io/projected/bb3e6885-9e7b-4a4a-b237-3233fcbe2129-kube-api-access-27x8g\") pod \"bb3e6885-9e7b-4a4a-b237-3233fcbe2129\" (UID: \"bb3e6885-9e7b-4a4a-b237-3233fcbe2129\") " Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.794939 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb3e6885-9e7b-4a4a-b237-3233fcbe2129-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb3e6885-9e7b-4a4a-b237-3233fcbe2129" (UID: "bb3e6885-9e7b-4a4a-b237-3233fcbe2129"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.798694 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb3e6885-9e7b-4a4a-b237-3233fcbe2129-kube-api-access-27x8g" (OuterVolumeSpecName: "kube-api-access-27x8g") pod "bb3e6885-9e7b-4a4a-b237-3233fcbe2129" (UID: "bb3e6885-9e7b-4a4a-b237-3233fcbe2129"). InnerVolumeSpecName "kube-api-access-27x8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.895468 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27x8g\" (UniqueName: \"kubernetes.io/projected/bb3e6885-9e7b-4a4a-b237-3233fcbe2129-kube-api-access-27x8g\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.895523 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb3e6885-9e7b-4a4a-b237-3233fcbe2129-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.903778 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-slgfx" Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.937583 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c091-account-create-update-5bw46" Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.943879 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-jvlmr" Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.996063 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwswb\" (UniqueName: \"kubernetes.io/projected/c26b8772-ff30-45b4-a167-df95b7051fe3-kube-api-access-zwswb\") pod \"c26b8772-ff30-45b4-a167-df95b7051fe3\" (UID: \"c26b8772-ff30-45b4-a167-df95b7051fe3\") " Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.996141 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c26b8772-ff30-45b4-a167-df95b7051fe3-operator-scripts\") pod \"c26b8772-ff30-45b4-a167-df95b7051fe3\" (UID: \"c26b8772-ff30-45b4-a167-df95b7051fe3\") " Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.996169 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f3430c0-f2f1-48bb-ae3f-2337f5ea30de-operator-scripts\") pod \"6f3430c0-f2f1-48bb-ae3f-2337f5ea30de\" (UID: \"6f3430c0-f2f1-48bb-ae3f-2337f5ea30de\") " Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.996199 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdmbm\" (UniqueName: \"kubernetes.io/projected/6f3430c0-f2f1-48bb-ae3f-2337f5ea30de-kube-api-access-pdmbm\") pod \"6f3430c0-f2f1-48bb-ae3f-2337f5ea30de\" (UID: \"6f3430c0-f2f1-48bb-ae3f-2337f5ea30de\") " Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.996256 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2hkx\" (UniqueName: \"kubernetes.io/projected/44a982f3-9124-416e-b0b1-199a57462954-kube-api-access-r2hkx\") pod \"44a982f3-9124-416e-b0b1-199a57462954\" (UID: \"44a982f3-9124-416e-b0b1-199a57462954\") " Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.996313 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44a982f3-9124-416e-b0b1-199a57462954-operator-scripts\") pod \"44a982f3-9124-416e-b0b1-199a57462954\" (UID: \"44a982f3-9124-416e-b0b1-199a57462954\") " Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.997741 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f3430c0-f2f1-48bb-ae3f-2337f5ea30de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f3430c0-f2f1-48bb-ae3f-2337f5ea30de" (UID: "6f3430c0-f2f1-48bb-ae3f-2337f5ea30de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:05:54 crc kubenswrapper[4832]: I0312 15:05:54.997848 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44a982f3-9124-416e-b0b1-199a57462954-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44a982f3-9124-416e-b0b1-199a57462954" (UID: "44a982f3-9124-416e-b0b1-199a57462954"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.000883 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26b8772-ff30-45b4-a167-df95b7051fe3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c26b8772-ff30-45b4-a167-df95b7051fe3" (UID: "c26b8772-ff30-45b4-a167-df95b7051fe3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.001078 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f3430c0-f2f1-48bb-ae3f-2337f5ea30de-kube-api-access-pdmbm" (OuterVolumeSpecName: "kube-api-access-pdmbm") pod "6f3430c0-f2f1-48bb-ae3f-2337f5ea30de" (UID: "6f3430c0-f2f1-48bb-ae3f-2337f5ea30de"). 
InnerVolumeSpecName "kube-api-access-pdmbm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.001725 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a982f3-9124-416e-b0b1-199a57462954-kube-api-access-r2hkx" (OuterVolumeSpecName: "kube-api-access-r2hkx") pod "44a982f3-9124-416e-b0b1-199a57462954" (UID: "44a982f3-9124-416e-b0b1-199a57462954"). InnerVolumeSpecName "kube-api-access-r2hkx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.002010 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c26b8772-ff30-45b4-a167-df95b7051fe3-kube-api-access-zwswb" (OuterVolumeSpecName: "kube-api-access-zwswb") pod "c26b8772-ff30-45b4-a167-df95b7051fe3" (UID: "c26b8772-ff30-45b4-a167-df95b7051fe3"). InnerVolumeSpecName "kube-api-access-zwswb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.098609 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdmbm\" (UniqueName: \"kubernetes.io/projected/6f3430c0-f2f1-48bb-ae3f-2337f5ea30de-kube-api-access-pdmbm\") on node \"crc\" DevicePath \"\""
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.098638 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2hkx\" (UniqueName: \"kubernetes.io/projected/44a982f3-9124-416e-b0b1-199a57462954-kube-api-access-r2hkx\") on node \"crc\" DevicePath \"\""
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.098648 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44a982f3-9124-416e-b0b1-199a57462954-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.098658 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwswb\" (UniqueName: \"kubernetes.io/projected/c26b8772-ff30-45b4-a167-df95b7051fe3-kube-api-access-zwswb\") on node \"crc\" DevicePath \"\""
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.098666 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c26b8772-ff30-45b4-a167-df95b7051fe3-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.098674 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f3430c0-f2f1-48bb-ae3f-2337f5ea30de-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.404250 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-etc-swift\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.404667 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c091-account-create-update-5bw46" event={"ID":"44a982f3-9124-416e-b0b1-199a57462954","Type":"ContainerDied","Data":"422a37a46ef3119b0e39c954ae94f39b94a52ec210fd30bf338f969cfad56574"}
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.404732 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="422a37a46ef3119b0e39c954ae94f39b94a52ec210fd30bf338f969cfad56574"
Mar 12 15:05:55 crc kubenswrapper[4832]: E0312 15:05:55.404760 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 12 15:05:55 crc kubenswrapper[4832]: E0312 15:05:55.404790 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.404850 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c091-account-create-update-5bw46"
Mar 12 15:05:55 crc kubenswrapper[4832]: E0312 15:05:55.404876 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-etc-swift podName:c2fcebd5-a8cd-4290-9055-e0a7bbec2854 nodeName:}" failed. No retries permitted until 2026-03-12 15:05:59.404843256 +0000 UTC m=+1118.048857522 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-etc-swift") pod "swift-storage-0" (UID: "c2fcebd5-a8cd-4290-9055-e0a7bbec2854") : configmap "swift-ring-files" not found
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.408129 4832 generic.go:334] "Generic (PLEG): container finished" podID="9e44c919-5e7d-4b58-85d7-919cd52679e2" containerID="5ec319f77b298eabd923f2465ee71737df381285e38544ac1ae0e2a410298850" exitCode=0
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.408198 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pjxk7" event={"ID":"9e44c919-5e7d-4b58-85d7-919cd52679e2","Type":"ContainerDied","Data":"5ec319f77b298eabd923f2465ee71737df381285e38544ac1ae0e2a410298850"}
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.408250 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pjxk7" event={"ID":"9e44c919-5e7d-4b58-85d7-919cd52679e2","Type":"ContainerStarted","Data":"14b96baaf2bf7458ef2f9be44334ba0293781c908860efe29696528dffd52d09"}
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.409908 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fcfwb"]
Mar 12 15:05:55 crc kubenswrapper[4832]: E0312 15:05:55.410298 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3e6885-9e7b-4a4a-b237-3233fcbe2129" containerName="mariadb-account-create-update"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.410318 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3e6885-9e7b-4a4a-b237-3233fcbe2129" containerName="mariadb-account-create-update"
Mar 12 15:05:55 crc kubenswrapper[4832]: E0312 15:05:55.410342 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44a982f3-9124-416e-b0b1-199a57462954" containerName="mariadb-account-create-update"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.410352 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="44a982f3-9124-416e-b0b1-199a57462954" containerName="mariadb-account-create-update"
Mar 12 15:05:55 crc kubenswrapper[4832]: E0312 15:05:55.410363 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f3430c0-f2f1-48bb-ae3f-2337f5ea30de" containerName="mariadb-database-create"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.410373 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f3430c0-f2f1-48bb-ae3f-2337f5ea30de" containerName="mariadb-database-create"
Mar 12 15:05:55 crc kubenswrapper[4832]: E0312 15:05:55.410388 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26b8772-ff30-45b4-a167-df95b7051fe3" containerName="mariadb-database-create"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.410399 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26b8772-ff30-45b4-a167-df95b7051fe3" containerName="mariadb-database-create"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.410651 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c26b8772-ff30-45b4-a167-df95b7051fe3" containerName="mariadb-database-create"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.410675 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f3430c0-f2f1-48bb-ae3f-2337f5ea30de" containerName="mariadb-database-create"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.410695 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="44a982f3-9124-416e-b0b1-199a57462954" containerName="mariadb-account-create-update"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.410709 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3e6885-9e7b-4a4a-b237-3233fcbe2129" containerName="mariadb-account-create-update"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.411306 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fcfwb"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.412161 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-slgfx" event={"ID":"c26b8772-ff30-45b4-a167-df95b7051fe3","Type":"ContainerDied","Data":"bf64c83e77a9546d2c8f9836ba71c9388a948d6db19cb9384cde3f557fdbd6fe"}
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.412185 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf64c83e77a9546d2c8f9836ba71c9388a948d6db19cb9384cde3f557fdbd6fe"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.412211 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-slgfx"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.413733 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4cbc-account-create-update-q4tzf" event={"ID":"bb3e6885-9e7b-4a4a-b237-3233fcbe2129","Type":"ContainerDied","Data":"9ac3c6ea15cb890d5940f17f1705aa7d45df11a4a61acaf7ec5947d56312180f"}
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.413768 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ac3c6ea15cb890d5940f17f1705aa7d45df11a4a61acaf7ec5947d56312180f"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.413793 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4cbc-account-create-update-q4tzf"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.415142 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.415805 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jvlmr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.415847 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jvlmr" event={"ID":"6f3430c0-f2f1-48bb-ae3f-2337f5ea30de","Type":"ContainerDied","Data":"24df4c843785f8a2407c8942cc77b7f421c4e32e60ca8bb157ff27e4c199d5d9"}
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.415893 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24df4c843785f8a2407c8942cc77b7f421c4e32e60ca8bb157ff27e4c199d5d9"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.418408 4832 generic.go:334] "Generic (PLEG): container finished" podID="3ae29670-d754-4a65-b982-146c9f8e8f59" containerID="4d6c484a89af206a0bfb57072bb3fc0403a2da4308765b0e758cb6271072a3fd" exitCode=0
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.418662 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e0bf-account-create-update-8zpsg" event={"ID":"3ae29670-d754-4a65-b982-146c9f8e8f59","Type":"ContainerDied","Data":"4d6c484a89af206a0bfb57072bb3fc0403a2da4308765b0e758cb6271072a3fd"}
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.418758 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e0bf-account-create-update-8zpsg" event={"ID":"3ae29670-d754-4a65-b982-146c9f8e8f59","Type":"ContainerStarted","Data":"017f15b861ef104227d2ca4d73a8365fc1d2fc867af069a179b228eb2996e654"}
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.440588 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fcfwb"]
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.505904 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs5jx\" (UniqueName: \"kubernetes.io/projected/1814d5c9-d37f-470b-bdf9-e51a8c5a844b-kube-api-access-rs5jx\") pod \"root-account-create-update-fcfwb\" (UID: \"1814d5c9-d37f-470b-bdf9-e51a8c5a844b\") " pod="openstack/root-account-create-update-fcfwb"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.505982 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1814d5c9-d37f-470b-bdf9-e51a8c5a844b-operator-scripts\") pod \"root-account-create-update-fcfwb\" (UID: \"1814d5c9-d37f-470b-bdf9-e51a8c5a844b\") " pod="openstack/root-account-create-update-fcfwb"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.608667 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs5jx\" (UniqueName: \"kubernetes.io/projected/1814d5c9-d37f-470b-bdf9-e51a8c5a844b-kube-api-access-rs5jx\") pod \"root-account-create-update-fcfwb\" (UID: \"1814d5c9-d37f-470b-bdf9-e51a8c5a844b\") " pod="openstack/root-account-create-update-fcfwb"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.609093 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1814d5c9-d37f-470b-bdf9-e51a8c5a844b-operator-scripts\") pod \"root-account-create-update-fcfwb\" (UID: \"1814d5c9-d37f-470b-bdf9-e51a8c5a844b\") " pod="openstack/root-account-create-update-fcfwb"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.609746 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1814d5c9-d37f-470b-bdf9-e51a8c5a844b-operator-scripts\") pod \"root-account-create-update-fcfwb\" (UID: \"1814d5c9-d37f-470b-bdf9-e51a8c5a844b\") " pod="openstack/root-account-create-update-fcfwb"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.627266 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs5jx\" (UniqueName: \"kubernetes.io/projected/1814d5c9-d37f-470b-bdf9-e51a8c5a844b-kube-api-access-rs5jx\") pod \"root-account-create-update-fcfwb\" (UID: \"1814d5c9-d37f-470b-bdf9-e51a8c5a844b\") " pod="openstack/root-account-create-update-fcfwb"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.670557 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-cc4rc"]
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.671566 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.682172 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.688874 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.693364 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.699767 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-cc4rc"]
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.718262 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-etc-swift\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.718334 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skwsk\" (UniqueName: \"kubernetes.io/projected/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-kube-api-access-skwsk\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.718432 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-combined-ca-bundle\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.718468 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-ring-data-devices\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.718517 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-swiftconf\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.718543 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-scripts\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.718640 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-dispersionconf\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.728381 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dbzrr"]
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.729720 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: E0312 15:05:55.730207 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-skwsk ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-cc4rc" podUID="d1de1a2f-aa13-49fc-82c9-e80ad0d1d728"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.750027 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fcfwb"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.767421 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dbzrr"]
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.779721 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-cc4rc"]
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.852292 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02632af9-ed7f-4481-865f-698db662e6fe-dispersionconf\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.852361 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-dispersionconf\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.852387 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq72r\" (UniqueName: \"kubernetes.io/projected/02632af9-ed7f-4481-865f-698db662e6fe-kube-api-access-dq72r\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.852415 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02632af9-ed7f-4481-865f-698db662e6fe-swiftconf\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.852486 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-etc-swift\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.852547 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skwsk\" (UniqueName: \"kubernetes.io/projected/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-kube-api-access-skwsk\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.852594 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02632af9-ed7f-4481-865f-698db662e6fe-ring-data-devices\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.852624 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-combined-ca-bundle\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.852653 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-ring-data-devices\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.852678 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-swiftconf\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.852696 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-scripts\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.852748 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02632af9-ed7f-4481-865f-698db662e6fe-scripts\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.852782 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02632af9-ed7f-4481-865f-698db662e6fe-combined-ca-bundle\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.852811 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02632af9-ed7f-4481-865f-698db662e6fe-etc-swift\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.859258 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-scripts\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.859851 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-ring-data-devices\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.860801 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-etc-swift\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.889614 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-combined-ca-bundle\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.895785 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-dispersionconf\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.900688 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skwsk\" (UniqueName: \"kubernetes.io/projected/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-kube-api-access-skwsk\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.908492 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-swiftconf\") pod \"swift-ring-rebalance-cc4rc\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") " pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.953911 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02632af9-ed7f-4481-865f-698db662e6fe-dispersionconf\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.953982 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq72r\" (UniqueName: \"kubernetes.io/projected/02632af9-ed7f-4481-865f-698db662e6fe-kube-api-access-dq72r\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.954028 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02632af9-ed7f-4481-865f-698db662e6fe-swiftconf\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.954746 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02632af9-ed7f-4481-865f-698db662e6fe-ring-data-devices\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.954819 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02632af9-ed7f-4481-865f-698db662e6fe-scripts\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.954838 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02632af9-ed7f-4481-865f-698db662e6fe-combined-ca-bundle\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.954855 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02632af9-ed7f-4481-865f-698db662e6fe-etc-swift\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.955238 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02632af9-ed7f-4481-865f-698db662e6fe-ring-data-devices\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.955360 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02632af9-ed7f-4481-865f-698db662e6fe-etc-swift\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.955718 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02632af9-ed7f-4481-865f-698db662e6fe-scripts\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.956979 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02632af9-ed7f-4481-865f-698db662e6fe-dispersionconf\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.957748 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02632af9-ed7f-4481-865f-698db662e6fe-swiftconf\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.958216 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02632af9-ed7f-4481-865f-698db662e6fe-combined-ca-bundle\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:55 crc kubenswrapper[4832]: I0312 15:05:55.994904 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq72r\" (UniqueName: \"kubernetes.io/projected/02632af9-ed7f-4481-865f-698db662e6fe-kube-api-access-dq72r\") pod \"swift-ring-rebalance-dbzrr\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.056053 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dbzrr"
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.315942 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fcfwb"]
Mar 12 15:05:56 crc kubenswrapper[4832]: W0312 15:05:56.322788 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1814d5c9_d37f_470b_bdf9_e51a8c5a844b.slice/crio-8f7e6e699dc26a1a88112733e55c186784204c96317c1fce47d50b78001f34c0 WatchSource:0}: Error finding container 8f7e6e699dc26a1a88112733e55c186784204c96317c1fce47d50b78001f34c0: Status 404 returned error can't find the container with id 8f7e6e699dc26a1a88112733e55c186784204c96317c1fce47d50b78001f34c0
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.434219 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.434248 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fcfwb" event={"ID":"1814d5c9-d37f-470b-bdf9-e51a8c5a844b","Type":"ContainerStarted","Data":"8f7e6e699dc26a1a88112733e55c186784204c96317c1fce47d50b78001f34c0"}
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.446402 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cc4rc"
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.461931 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dbzrr"]
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.564275 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-dispersionconf\") pod \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") "
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.564414 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-swiftconf\") pod \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") "
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.564455 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-scripts\") pod \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") "
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.564484 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-etc-swift\") pod \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") "
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.564568 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skwsk\" (UniqueName: \"kubernetes.io/projected/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-kube-api-access-skwsk\") pod \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") "
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.564583 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-ring-data-devices\") pod \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") "
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.564636 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-combined-ca-bundle\") pod \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\" (UID: \"d1de1a2f-aa13-49fc-82c9-e80ad0d1d728\") "
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.566257 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d1de1a2f-aa13-49fc-82c9-e80ad0d1d728" (UID: "d1de1a2f-aa13-49fc-82c9-e80ad0d1d728"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.566590 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "d1de1a2f-aa13-49fc-82c9-e80ad0d1d728" (UID: "d1de1a2f-aa13-49fc-82c9-e80ad0d1d728"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.566747 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-scripts" (OuterVolumeSpecName: "scripts") pod "d1de1a2f-aa13-49fc-82c9-e80ad0d1d728" (UID: "d1de1a2f-aa13-49fc-82c9-e80ad0d1d728"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.571685 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d1de1a2f-aa13-49fc-82c9-e80ad0d1d728" (UID: "d1de1a2f-aa13-49fc-82c9-e80ad0d1d728"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.572715 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-kube-api-access-skwsk" (OuterVolumeSpecName: "kube-api-access-skwsk") pod "d1de1a2f-aa13-49fc-82c9-e80ad0d1d728" (UID: "d1de1a2f-aa13-49fc-82c9-e80ad0d1d728"). InnerVolumeSpecName "kube-api-access-skwsk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.573352 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1de1a2f-aa13-49fc-82c9-e80ad0d1d728" (UID: "d1de1a2f-aa13-49fc-82c9-e80ad0d1d728"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.578558 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d1de1a2f-aa13-49fc-82c9-e80ad0d1d728" (UID: "d1de1a2f-aa13-49fc-82c9-e80ad0d1d728"). InnerVolumeSpecName "swiftconf".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.668874 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.668913 4832 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.668927 4832 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.668941 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.668955 4832 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.668968 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skwsk\" (UniqueName: \"kubernetes.io/projected/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-kube-api-access-skwsk\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.668987 4832 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.798690 4832 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pjxk7" Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.871296 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54f9r\" (UniqueName: \"kubernetes.io/projected/9e44c919-5e7d-4b58-85d7-919cd52679e2-kube-api-access-54f9r\") pod \"9e44c919-5e7d-4b58-85d7-919cd52679e2\" (UID: \"9e44c919-5e7d-4b58-85d7-919cd52679e2\") " Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.871417 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e44c919-5e7d-4b58-85d7-919cd52679e2-operator-scripts\") pod \"9e44c919-5e7d-4b58-85d7-919cd52679e2\" (UID: \"9e44c919-5e7d-4b58-85d7-919cd52679e2\") " Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.872009 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e44c919-5e7d-4b58-85d7-919cd52679e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e44c919-5e7d-4b58-85d7-919cd52679e2" (UID: "9e44c919-5e7d-4b58-85d7-919cd52679e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.875392 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e44c919-5e7d-4b58-85d7-919cd52679e2-kube-api-access-54f9r" (OuterVolumeSpecName: "kube-api-access-54f9r") pod "9e44c919-5e7d-4b58-85d7-919cd52679e2" (UID: "9e44c919-5e7d-4b58-85d7-919cd52679e2"). InnerVolumeSpecName "kube-api-access-54f9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.879287 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e0bf-account-create-update-8zpsg" Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.973112 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ae29670-d754-4a65-b982-146c9f8e8f59-operator-scripts\") pod \"3ae29670-d754-4a65-b982-146c9f8e8f59\" (UID: \"3ae29670-d754-4a65-b982-146c9f8e8f59\") " Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.973235 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28qcs\" (UniqueName: \"kubernetes.io/projected/3ae29670-d754-4a65-b982-146c9f8e8f59-kube-api-access-28qcs\") pod \"3ae29670-d754-4a65-b982-146c9f8e8f59\" (UID: \"3ae29670-d754-4a65-b982-146c9f8e8f59\") " Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.973649 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ae29670-d754-4a65-b982-146c9f8e8f59-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3ae29670-d754-4a65-b982-146c9f8e8f59" (UID: "3ae29670-d754-4a65-b982-146c9f8e8f59"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.973993 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ae29670-d754-4a65-b982-146c9f8e8f59-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.974016 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54f9r\" (UniqueName: \"kubernetes.io/projected/9e44c919-5e7d-4b58-85d7-919cd52679e2-kube-api-access-54f9r\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.974028 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e44c919-5e7d-4b58-85d7-919cd52679e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:56 crc kubenswrapper[4832]: I0312 15:05:56.980037 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ae29670-d754-4a65-b982-146c9f8e8f59-kube-api-access-28qcs" (OuterVolumeSpecName: "kube-api-access-28qcs") pod "3ae29670-d754-4a65-b982-146c9f8e8f59" (UID: "3ae29670-d754-4a65-b982-146c9f8e8f59"). InnerVolumeSpecName "kube-api-access-28qcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:05:57 crc kubenswrapper[4832]: I0312 15:05:57.075794 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28qcs\" (UniqueName: \"kubernetes.io/projected/3ae29670-d754-4a65-b982-146c9f8e8f59-kube-api-access-28qcs\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:57 crc kubenswrapper[4832]: I0312 15:05:57.443623 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pjxk7" event={"ID":"9e44c919-5e7d-4b58-85d7-919cd52679e2","Type":"ContainerDied","Data":"14b96baaf2bf7458ef2f9be44334ba0293781c908860efe29696528dffd52d09"} Mar 12 15:05:57 crc kubenswrapper[4832]: I0312 15:05:57.443666 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14b96baaf2bf7458ef2f9be44334ba0293781c908860efe29696528dffd52d09" Mar 12 15:05:57 crc kubenswrapper[4832]: I0312 15:05:57.443688 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pjxk7" Mar 12 15:05:57 crc kubenswrapper[4832]: I0312 15:05:57.449871 4832 generic.go:334] "Generic (PLEG): container finished" podID="1814d5c9-d37f-470b-bdf9-e51a8c5a844b" containerID="90b05e586bfcc3d7d46a9b7c6718bf3c89293b606da609d6fe24a423a89b17ba" exitCode=0 Mar 12 15:05:57 crc kubenswrapper[4832]: I0312 15:05:57.449945 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fcfwb" event={"ID":"1814d5c9-d37f-470b-bdf9-e51a8c5a844b","Type":"ContainerDied","Data":"90b05e586bfcc3d7d46a9b7c6718bf3c89293b606da609d6fe24a423a89b17ba"} Mar 12 15:05:57 crc kubenswrapper[4832]: I0312 15:05:57.452597 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e0bf-account-create-update-8zpsg" event={"ID":"3ae29670-d754-4a65-b982-146c9f8e8f59","Type":"ContainerDied","Data":"017f15b861ef104227d2ca4d73a8365fc1d2fc867af069a179b228eb2996e654"} Mar 12 15:05:57 crc kubenswrapper[4832]: I0312 
15:05:57.452624 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="017f15b861ef104227d2ca4d73a8365fc1d2fc867af069a179b228eb2996e654" Mar 12 15:05:57 crc kubenswrapper[4832]: I0312 15:05:57.452656 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e0bf-account-create-update-8zpsg" Mar 12 15:05:57 crc kubenswrapper[4832]: I0312 15:05:57.455373 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cc4rc" Mar 12 15:05:57 crc kubenswrapper[4832]: I0312 15:05:57.455419 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dbzrr" event={"ID":"02632af9-ed7f-4481-865f-698db662e6fe","Type":"ContainerStarted","Data":"8c63c95250ea32819e18856b43d34868b815884ee99792905d10a5982aa1011a"} Mar 12 15:05:57 crc kubenswrapper[4832]: I0312 15:05:57.522583 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-cc4rc"] Mar 12 15:05:57 crc kubenswrapper[4832]: I0312 15:05:57.530439 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-cc4rc"] Mar 12 15:05:58 crc kubenswrapper[4832]: I0312 15:05:58.341249 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-lg46l" podUID="2861e1f8-fb7a-4fca-8180-d0c561241aa6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout" Mar 12 15:05:58 crc kubenswrapper[4832]: I0312 15:05:58.673186 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1de1a2f-aa13-49fc-82c9-e80ad0d1d728" path="/var/lib/kubelet/pods/d1de1a2f-aa13-49fc-82c9-e80ad0d1d728/volumes" Mar 12 15:05:58 crc kubenswrapper[4832]: I0312 15:05:58.830555 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-w2psf"] Mar 12 15:05:58 crc kubenswrapper[4832]: E0312 15:05:58.830945 4832 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9e44c919-5e7d-4b58-85d7-919cd52679e2" containerName="mariadb-database-create" Mar 12 15:05:58 crc kubenswrapper[4832]: I0312 15:05:58.830960 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e44c919-5e7d-4b58-85d7-919cd52679e2" containerName="mariadb-database-create" Mar 12 15:05:58 crc kubenswrapper[4832]: E0312 15:05:58.830984 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ae29670-d754-4a65-b982-146c9f8e8f59" containerName="mariadb-account-create-update" Mar 12 15:05:58 crc kubenswrapper[4832]: I0312 15:05:58.830992 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ae29670-d754-4a65-b982-146c9f8e8f59" containerName="mariadb-account-create-update" Mar 12 15:05:58 crc kubenswrapper[4832]: I0312 15:05:58.831179 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e44c919-5e7d-4b58-85d7-919cd52679e2" containerName="mariadb-database-create" Mar 12 15:05:58 crc kubenswrapper[4832]: I0312 15:05:58.831202 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ae29670-d754-4a65-b982-146c9f8e8f59" containerName="mariadb-account-create-update" Mar 12 15:05:58 crc kubenswrapper[4832]: I0312 15:05:58.831807 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-w2psf" Mar 12 15:05:58 crc kubenswrapper[4832]: I0312 15:05:58.846250 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-65cqt" Mar 12 15:05:58 crc kubenswrapper[4832]: I0312 15:05:58.846497 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 12 15:05:58 crc kubenswrapper[4832]: I0312 15:05:58.854131 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-w2psf"] Mar 12 15:05:58 crc kubenswrapper[4832]: I0312 15:05:58.932754 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnfj8\" (UniqueName: \"kubernetes.io/projected/98264f95-8803-42bb-b985-359a3556a90c-kube-api-access-bnfj8\") pod \"glance-db-sync-w2psf\" (UID: \"98264f95-8803-42bb-b985-359a3556a90c\") " pod="openstack/glance-db-sync-w2psf" Mar 12 15:05:58 crc kubenswrapper[4832]: I0312 15:05:58.932919 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/98264f95-8803-42bb-b985-359a3556a90c-db-sync-config-data\") pod \"glance-db-sync-w2psf\" (UID: \"98264f95-8803-42bb-b985-359a3556a90c\") " pod="openstack/glance-db-sync-w2psf" Mar 12 15:05:58 crc kubenswrapper[4832]: I0312 15:05:58.933002 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98264f95-8803-42bb-b985-359a3556a90c-config-data\") pod \"glance-db-sync-w2psf\" (UID: \"98264f95-8803-42bb-b985-359a3556a90c\") " pod="openstack/glance-db-sync-w2psf" Mar 12 15:05:58 crc kubenswrapper[4832]: I0312 15:05:58.933034 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/98264f95-8803-42bb-b985-359a3556a90c-combined-ca-bundle\") pod \"glance-db-sync-w2psf\" (UID: \"98264f95-8803-42bb-b985-359a3556a90c\") " pod="openstack/glance-db-sync-w2psf" Mar 12 15:05:59 crc kubenswrapper[4832]: I0312 15:05:59.034554 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/98264f95-8803-42bb-b985-359a3556a90c-db-sync-config-data\") pod \"glance-db-sync-w2psf\" (UID: \"98264f95-8803-42bb-b985-359a3556a90c\") " pod="openstack/glance-db-sync-w2psf" Mar 12 15:05:59 crc kubenswrapper[4832]: I0312 15:05:59.034645 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98264f95-8803-42bb-b985-359a3556a90c-config-data\") pod \"glance-db-sync-w2psf\" (UID: \"98264f95-8803-42bb-b985-359a3556a90c\") " pod="openstack/glance-db-sync-w2psf" Mar 12 15:05:59 crc kubenswrapper[4832]: I0312 15:05:59.034676 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98264f95-8803-42bb-b985-359a3556a90c-combined-ca-bundle\") pod \"glance-db-sync-w2psf\" (UID: \"98264f95-8803-42bb-b985-359a3556a90c\") " pod="openstack/glance-db-sync-w2psf" Mar 12 15:05:59 crc kubenswrapper[4832]: I0312 15:05:59.034707 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnfj8\" (UniqueName: \"kubernetes.io/projected/98264f95-8803-42bb-b985-359a3556a90c-kube-api-access-bnfj8\") pod \"glance-db-sync-w2psf\" (UID: \"98264f95-8803-42bb-b985-359a3556a90c\") " pod="openstack/glance-db-sync-w2psf" Mar 12 15:05:59 crc kubenswrapper[4832]: I0312 15:05:59.041453 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/98264f95-8803-42bb-b985-359a3556a90c-db-sync-config-data\") pod \"glance-db-sync-w2psf\" (UID: 
\"98264f95-8803-42bb-b985-359a3556a90c\") " pod="openstack/glance-db-sync-w2psf" Mar 12 15:05:59 crc kubenswrapper[4832]: I0312 15:05:59.042013 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98264f95-8803-42bb-b985-359a3556a90c-config-data\") pod \"glance-db-sync-w2psf\" (UID: \"98264f95-8803-42bb-b985-359a3556a90c\") " pod="openstack/glance-db-sync-w2psf" Mar 12 15:05:59 crc kubenswrapper[4832]: I0312 15:05:59.043288 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98264f95-8803-42bb-b985-359a3556a90c-combined-ca-bundle\") pod \"glance-db-sync-w2psf\" (UID: \"98264f95-8803-42bb-b985-359a3556a90c\") " pod="openstack/glance-db-sync-w2psf" Mar 12 15:05:59 crc kubenswrapper[4832]: I0312 15:05:59.053362 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnfj8\" (UniqueName: \"kubernetes.io/projected/98264f95-8803-42bb-b985-359a3556a90c-kube-api-access-bnfj8\") pod \"glance-db-sync-w2psf\" (UID: \"98264f95-8803-42bb-b985-359a3556a90c\") " pod="openstack/glance-db-sync-w2psf" Mar 12 15:05:59 crc kubenswrapper[4832]: I0312 15:05:59.167328 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-w2psf" Mar 12 15:05:59 crc kubenswrapper[4832]: I0312 15:05:59.440521 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-etc-swift\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:05:59 crc kubenswrapper[4832]: E0312 15:05:59.440731 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 15:05:59 crc kubenswrapper[4832]: E0312 15:05:59.440771 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 15:05:59 crc kubenswrapper[4832]: E0312 15:05:59.440841 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-etc-swift podName:c2fcebd5-a8cd-4290-9055-e0a7bbec2854 nodeName:}" failed. No retries permitted until 2026-03-12 15:06:07.440813603 +0000 UTC m=+1126.084827839 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-etc-swift") pod "swift-storage-0" (UID: "c2fcebd5-a8cd-4290-9055-e0a7bbec2854") : configmap "swift-ring-files" not found Mar 12 15:05:59 crc kubenswrapper[4832]: I0312 15:05:59.687734 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fcfwb" Mar 12 15:05:59 crc kubenswrapper[4832]: I0312 15:05:59.747244 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs5jx\" (UniqueName: \"kubernetes.io/projected/1814d5c9-d37f-470b-bdf9-e51a8c5a844b-kube-api-access-rs5jx\") pod \"1814d5c9-d37f-470b-bdf9-e51a8c5a844b\" (UID: \"1814d5c9-d37f-470b-bdf9-e51a8c5a844b\") " Mar 12 15:05:59 crc kubenswrapper[4832]: I0312 15:05:59.747772 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1814d5c9-d37f-470b-bdf9-e51a8c5a844b-operator-scripts\") pod \"1814d5c9-d37f-470b-bdf9-e51a8c5a844b\" (UID: \"1814d5c9-d37f-470b-bdf9-e51a8c5a844b\") " Mar 12 15:05:59 crc kubenswrapper[4832]: I0312 15:05:59.748738 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1814d5c9-d37f-470b-bdf9-e51a8c5a844b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1814d5c9-d37f-470b-bdf9-e51a8c5a844b" (UID: "1814d5c9-d37f-470b-bdf9-e51a8c5a844b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:05:59 crc kubenswrapper[4832]: I0312 15:05:59.754706 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1814d5c9-d37f-470b-bdf9-e51a8c5a844b-kube-api-access-rs5jx" (OuterVolumeSpecName: "kube-api-access-rs5jx") pod "1814d5c9-d37f-470b-bdf9-e51a8c5a844b" (UID: "1814d5c9-d37f-470b-bdf9-e51a8c5a844b"). InnerVolumeSpecName "kube-api-access-rs5jx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:05:59 crc kubenswrapper[4832]: I0312 15:05:59.852548 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1814d5c9-d37f-470b-bdf9-e51a8c5a844b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:05:59 crc kubenswrapper[4832]: I0312 15:05:59.852578 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs5jx\" (UniqueName: \"kubernetes.io/projected/1814d5c9-d37f-470b-bdf9-e51a8c5a844b-kube-api-access-rs5jx\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.134689 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555466-7qgq7"] Mar 12 15:06:00 crc kubenswrapper[4832]: E0312 15:06:00.135247 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1814d5c9-d37f-470b-bdf9-e51a8c5a844b" containerName="mariadb-account-create-update" Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.135264 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1814d5c9-d37f-470b-bdf9-e51a8c5a844b" containerName="mariadb-account-create-update" Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.135435 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1814d5c9-d37f-470b-bdf9-e51a8c5a844b" containerName="mariadb-account-create-update" Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.135980 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555466-7qgq7" Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.137783 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.137991 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.139973 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.144168 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555466-7qgq7"] Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.241259 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-w2psf"] Mar 12 15:06:00 crc kubenswrapper[4832]: W0312 15:06:00.247743 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98264f95_8803_42bb_b985_359a3556a90c.slice/crio-ea836f7dec9e2512766fad16fb7ea5c9a3a183653494b191e2f4c11b3a8add9e WatchSource:0}: Error finding container ea836f7dec9e2512766fad16fb7ea5c9a3a183653494b191e2f4c11b3a8add9e: Status 404 returned error can't find the container with id ea836f7dec9e2512766fad16fb7ea5c9a3a183653494b191e2f4c11b3a8add9e Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.258100 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd8hc\" (UniqueName: \"kubernetes.io/projected/e4c0833b-5f89-41c6-99a8-a07a779a5e22-kube-api-access-bd8hc\") pod \"auto-csr-approver-29555466-7qgq7\" (UID: \"e4c0833b-5f89-41c6-99a8-a07a779a5e22\") " pod="openshift-infra/auto-csr-approver-29555466-7qgq7" Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.360068 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd8hc\" (UniqueName: \"kubernetes.io/projected/e4c0833b-5f89-41c6-99a8-a07a779a5e22-kube-api-access-bd8hc\") pod \"auto-csr-approver-29555466-7qgq7\" (UID: \"e4c0833b-5f89-41c6-99a8-a07a779a5e22\") " pod="openshift-infra/auto-csr-approver-29555466-7qgq7" Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.392977 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd8hc\" (UniqueName: \"kubernetes.io/projected/e4c0833b-5f89-41c6-99a8-a07a779a5e22-kube-api-access-bd8hc\") pod \"auto-csr-approver-29555466-7qgq7\" (UID: \"e4c0833b-5f89-41c6-99a8-a07a779a5e22\") " pod="openshift-infra/auto-csr-approver-29555466-7qgq7" Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.468132 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555466-7qgq7" Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.483306 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fcfwb" Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.483303 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fcfwb" event={"ID":"1814d5c9-d37f-470b-bdf9-e51a8c5a844b","Type":"ContainerDied","Data":"8f7e6e699dc26a1a88112733e55c186784204c96317c1fce47d50b78001f34c0"} Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.483477 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f7e6e699dc26a1a88112733e55c186784204c96317c1fce47d50b78001f34c0" Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.486961 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dbzrr" event={"ID":"02632af9-ed7f-4481-865f-698db662e6fe","Type":"ContainerStarted","Data":"66f5879d3fb87d29fc2e3d44974017a80d45165be054e357c0d5a7f7ecb02221"} Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.488153 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-w2psf" event={"ID":"98264f95-8803-42bb-b985-359a3556a90c","Type":"ContainerStarted","Data":"ea836f7dec9e2512766fad16fb7ea5c9a3a183653494b191e2f4c11b3a8add9e"} Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.517325 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-dbzrr" podStartSLOduration=2.370875177 podStartE2EDuration="5.51729746s" podCreationTimestamp="2026-03-12 15:05:55 +0000 UTC" firstStartedPulling="2026-03-12 15:05:56.475003429 +0000 UTC m=+1115.119017675" lastFinishedPulling="2026-03-12 15:05:59.621425732 +0000 UTC m=+1118.265439958" observedRunningTime="2026-03-12 15:06:00.506860187 +0000 UTC m=+1119.150874443" watchObservedRunningTime="2026-03-12 15:06:00.51729746 +0000 UTC m=+1119.161311696" Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.917840 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.922077 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555466-7qgq7"] Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.981002 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gxsg5"] Mar 12 15:06:00 crc kubenswrapper[4832]: I0312 15:06:00.981274 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5" podUID="fd9336b7-c523-476b-b929-b750118653cd" containerName="dnsmasq-dns" containerID="cri-o://c988ba424d2715a27a3137688064a85fd7ab57108c4f409a7c8e25370467aea1" gracePeriod=10 Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.488527 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5" Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.497009 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555466-7qgq7" event={"ID":"e4c0833b-5f89-41c6-99a8-a07a779a5e22","Type":"ContainerStarted","Data":"813c202cfbb3624874aa83a7b2665ce8a0ec1505f6d077a172c65a40e92706de"} Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.499112 4832 generic.go:334] "Generic (PLEG): container finished" podID="fd9336b7-c523-476b-b929-b750118653cd" containerID="c988ba424d2715a27a3137688064a85fd7ab57108c4f409a7c8e25370467aea1" exitCode=0 Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.499697 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5" Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.499695 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5" event={"ID":"fd9336b7-c523-476b-b929-b750118653cd","Type":"ContainerDied","Data":"c988ba424d2715a27a3137688064a85fd7ab57108c4f409a7c8e25370467aea1"} Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.499768 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gxsg5" event={"ID":"fd9336b7-c523-476b-b929-b750118653cd","Type":"ContainerDied","Data":"63af1e8d4989936ae4176c0922cb293d8df3cd4f2ad5db09235719f535c47d2d"} Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.499789 4832 scope.go:117] "RemoveContainer" containerID="c988ba424d2715a27a3137688064a85fd7ab57108c4f409a7c8e25370467aea1" Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.537193 4832 scope.go:117] "RemoveContainer" containerID="8868573d67717861ece7fc771f1e299ce179168cebb885b1e50302b2829688da" Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.559552 4832 scope.go:117] "RemoveContainer" containerID="c988ba424d2715a27a3137688064a85fd7ab57108c4f409a7c8e25370467aea1" Mar 12 15:06:01 crc kubenswrapper[4832]: E0312 15:06:01.560920 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c988ba424d2715a27a3137688064a85fd7ab57108c4f409a7c8e25370467aea1\": container with ID starting with c988ba424d2715a27a3137688064a85fd7ab57108c4f409a7c8e25370467aea1 not found: ID does not exist" containerID="c988ba424d2715a27a3137688064a85fd7ab57108c4f409a7c8e25370467aea1" Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.561048 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c988ba424d2715a27a3137688064a85fd7ab57108c4f409a7c8e25370467aea1"} err="failed to get container status 
\"c988ba424d2715a27a3137688064a85fd7ab57108c4f409a7c8e25370467aea1\": rpc error: code = NotFound desc = could not find container \"c988ba424d2715a27a3137688064a85fd7ab57108c4f409a7c8e25370467aea1\": container with ID starting with c988ba424d2715a27a3137688064a85fd7ab57108c4f409a7c8e25370467aea1 not found: ID does not exist" Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.561146 4832 scope.go:117] "RemoveContainer" containerID="8868573d67717861ece7fc771f1e299ce179168cebb885b1e50302b2829688da" Mar 12 15:06:01 crc kubenswrapper[4832]: E0312 15:06:01.561644 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8868573d67717861ece7fc771f1e299ce179168cebb885b1e50302b2829688da\": container with ID starting with 8868573d67717861ece7fc771f1e299ce179168cebb885b1e50302b2829688da not found: ID does not exist" containerID="8868573d67717861ece7fc771f1e299ce179168cebb885b1e50302b2829688da" Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.561690 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8868573d67717861ece7fc771f1e299ce179168cebb885b1e50302b2829688da"} err="failed to get container status \"8868573d67717861ece7fc771f1e299ce179168cebb885b1e50302b2829688da\": rpc error: code = NotFound desc = could not find container \"8868573d67717861ece7fc771f1e299ce179168cebb885b1e50302b2829688da\": container with ID starting with 8868573d67717861ece7fc771f1e299ce179168cebb885b1e50302b2829688da not found: ID does not exist" Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.587761 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd9336b7-c523-476b-b929-b750118653cd-dns-svc\") pod \"fd9336b7-c523-476b-b929-b750118653cd\" (UID: \"fd9336b7-c523-476b-b929-b750118653cd\") " Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.587798 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9336b7-c523-476b-b929-b750118653cd-config\") pod \"fd9336b7-c523-476b-b929-b750118653cd\" (UID: \"fd9336b7-c523-476b-b929-b750118653cd\") " Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.587820 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zqp5\" (UniqueName: \"kubernetes.io/projected/fd9336b7-c523-476b-b929-b750118653cd-kube-api-access-8zqp5\") pod \"fd9336b7-c523-476b-b929-b750118653cd\" (UID: \"fd9336b7-c523-476b-b929-b750118653cd\") " Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.592740 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd9336b7-c523-476b-b929-b750118653cd-kube-api-access-8zqp5" (OuterVolumeSpecName: "kube-api-access-8zqp5") pod "fd9336b7-c523-476b-b929-b750118653cd" (UID: "fd9336b7-c523-476b-b929-b750118653cd"). InnerVolumeSpecName "kube-api-access-8zqp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.631626 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd9336b7-c523-476b-b929-b750118653cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fd9336b7-c523-476b-b929-b750118653cd" (UID: "fd9336b7-c523-476b-b929-b750118653cd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.641449 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd9336b7-c523-476b-b929-b750118653cd-config" (OuterVolumeSpecName: "config") pod "fd9336b7-c523-476b-b929-b750118653cd" (UID: "fd9336b7-c523-476b-b929-b750118653cd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.690285 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd9336b7-c523-476b-b929-b750118653cd-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.690310 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9336b7-c523-476b-b929-b750118653cd-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.690319 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zqp5\" (UniqueName: \"kubernetes.io/projected/fd9336b7-c523-476b-b929-b750118653cd-kube-api-access-8zqp5\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.740351 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fcfwb"] Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.747616 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fcfwb"] Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.833267 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gxsg5"] Mar 12 15:06:01 crc kubenswrapper[4832]: I0312 15:06:01.841651 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gxsg5"] Mar 12 15:06:02 crc kubenswrapper[4832]: I0312 15:06:02.514514 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555466-7qgq7" event={"ID":"e4c0833b-5f89-41c6-99a8-a07a779a5e22","Type":"ContainerStarted","Data":"02efc6bf364a734269c3047c758f10f7d5f3200ea8dc50d3f30efb39a906e879"} Mar 12 15:06:02 crc kubenswrapper[4832]: I0312 15:06:02.638226 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1814d5c9-d37f-470b-bdf9-e51a8c5a844b" path="/var/lib/kubelet/pods/1814d5c9-d37f-470b-bdf9-e51a8c5a844b/volumes" Mar 12 15:06:02 crc kubenswrapper[4832]: I0312 15:06:02.639308 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd9336b7-c523-476b-b929-b750118653cd" path="/var/lib/kubelet/pods/fd9336b7-c523-476b-b929-b750118653cd/volumes" Mar 12 15:06:03 crc kubenswrapper[4832]: I0312 15:06:03.534253 4832 generic.go:334] "Generic (PLEG): container finished" podID="e4c0833b-5f89-41c6-99a8-a07a779a5e22" containerID="02efc6bf364a734269c3047c758f10f7d5f3200ea8dc50d3f30efb39a906e879" exitCode=0 Mar 12 15:06:03 crc kubenswrapper[4832]: I0312 15:06:03.534304 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555466-7qgq7" event={"ID":"e4c0833b-5f89-41c6-99a8-a07a779a5e22","Type":"ContainerDied","Data":"02efc6bf364a734269c3047c758f10f7d5f3200ea8dc50d3f30efb39a906e879"} Mar 12 15:06:03 crc kubenswrapper[4832]: I0312 15:06:03.919971 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555466-7qgq7" Mar 12 15:06:04 crc kubenswrapper[4832]: I0312 15:06:04.016156 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 12 15:06:04 crc kubenswrapper[4832]: I0312 15:06:04.024258 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd8hc\" (UniqueName: \"kubernetes.io/projected/e4c0833b-5f89-41c6-99a8-a07a779a5e22-kube-api-access-bd8hc\") pod \"e4c0833b-5f89-41c6-99a8-a07a779a5e22\" (UID: \"e4c0833b-5f89-41c6-99a8-a07a779a5e22\") " Mar 12 15:06:04 crc kubenswrapper[4832]: I0312 15:06:04.042561 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4c0833b-5f89-41c6-99a8-a07a779a5e22-kube-api-access-bd8hc" (OuterVolumeSpecName: "kube-api-access-bd8hc") pod "e4c0833b-5f89-41c6-99a8-a07a779a5e22" (UID: "e4c0833b-5f89-41c6-99a8-a07a779a5e22"). InnerVolumeSpecName "kube-api-access-bd8hc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:04 crc kubenswrapper[4832]: I0312 15:06:04.126838 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd8hc\" (UniqueName: \"kubernetes.io/projected/e4c0833b-5f89-41c6-99a8-a07a779a5e22-kube-api-access-bd8hc\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:04 crc kubenswrapper[4832]: I0312 15:06:04.542069 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555466-7qgq7" event={"ID":"e4c0833b-5f89-41c6-99a8-a07a779a5e22","Type":"ContainerDied","Data":"813c202cfbb3624874aa83a7b2665ce8a0ec1505f6d077a172c65a40e92706de"} Mar 12 15:06:04 crc kubenswrapper[4832]: I0312 15:06:04.542135 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="813c202cfbb3624874aa83a7b2665ce8a0ec1505f6d077a172c65a40e92706de" Mar 12 15:06:04 crc kubenswrapper[4832]: I0312 15:06:04.542227 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555466-7qgq7" Mar 12 15:06:04 crc kubenswrapper[4832]: I0312 15:06:04.988144 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555460-vd8s7"] Mar 12 15:06:04 crc kubenswrapper[4832]: I0312 15:06:04.994077 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555460-vd8s7"] Mar 12 15:06:05 crc kubenswrapper[4832]: I0312 15:06:05.435019 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-rccst"] Mar 12 15:06:05 crc kubenswrapper[4832]: E0312 15:06:05.435472 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9336b7-c523-476b-b929-b750118653cd" containerName="dnsmasq-dns" Mar 12 15:06:05 crc kubenswrapper[4832]: I0312 15:06:05.435497 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9336b7-c523-476b-b929-b750118653cd" containerName="dnsmasq-dns" Mar 12 15:06:05 crc 
kubenswrapper[4832]: E0312 15:06:05.435548 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4c0833b-5f89-41c6-99a8-a07a779a5e22" containerName="oc" Mar 12 15:06:05 crc kubenswrapper[4832]: I0312 15:06:05.435557 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c0833b-5f89-41c6-99a8-a07a779a5e22" containerName="oc" Mar 12 15:06:05 crc kubenswrapper[4832]: E0312 15:06:05.435574 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9336b7-c523-476b-b929-b750118653cd" containerName="init" Mar 12 15:06:05 crc kubenswrapper[4832]: I0312 15:06:05.435583 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9336b7-c523-476b-b929-b750118653cd" containerName="init" Mar 12 15:06:05 crc kubenswrapper[4832]: I0312 15:06:05.435781 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd9336b7-c523-476b-b929-b750118653cd" containerName="dnsmasq-dns" Mar 12 15:06:05 crc kubenswrapper[4832]: I0312 15:06:05.435805 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4c0833b-5f89-41c6-99a8-a07a779a5e22" containerName="oc" Mar 12 15:06:05 crc kubenswrapper[4832]: I0312 15:06:05.436454 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rccst" Mar 12 15:06:05 crc kubenswrapper[4832]: I0312 15:06:05.442388 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rccst"] Mar 12 15:06:05 crc kubenswrapper[4832]: I0312 15:06:05.443062 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 12 15:06:05 crc kubenswrapper[4832]: I0312 15:06:05.554756 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxxqw\" (UniqueName: \"kubernetes.io/projected/31099f81-185e-4771-b5a9-978d09fac47a-kube-api-access-jxxqw\") pod \"root-account-create-update-rccst\" (UID: \"31099f81-185e-4771-b5a9-978d09fac47a\") " pod="openstack/root-account-create-update-rccst" Mar 12 15:06:05 crc kubenswrapper[4832]: I0312 15:06:05.554822 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31099f81-185e-4771-b5a9-978d09fac47a-operator-scripts\") pod \"root-account-create-update-rccst\" (UID: \"31099f81-185e-4771-b5a9-978d09fac47a\") " pod="openstack/root-account-create-update-rccst" Mar 12 15:06:05 crc kubenswrapper[4832]: I0312 15:06:05.656373 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxxqw\" (UniqueName: \"kubernetes.io/projected/31099f81-185e-4771-b5a9-978d09fac47a-kube-api-access-jxxqw\") pod \"root-account-create-update-rccst\" (UID: \"31099f81-185e-4771-b5a9-978d09fac47a\") " pod="openstack/root-account-create-update-rccst" Mar 12 15:06:05 crc kubenswrapper[4832]: I0312 15:06:05.656421 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31099f81-185e-4771-b5a9-978d09fac47a-operator-scripts\") pod \"root-account-create-update-rccst\" (UID: 
\"31099f81-185e-4771-b5a9-978d09fac47a\") " pod="openstack/root-account-create-update-rccst" Mar 12 15:06:05 crc kubenswrapper[4832]: I0312 15:06:05.657424 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31099f81-185e-4771-b5a9-978d09fac47a-operator-scripts\") pod \"root-account-create-update-rccst\" (UID: \"31099f81-185e-4771-b5a9-978d09fac47a\") " pod="openstack/root-account-create-update-rccst" Mar 12 15:06:05 crc kubenswrapper[4832]: I0312 15:06:05.687732 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxxqw\" (UniqueName: \"kubernetes.io/projected/31099f81-185e-4771-b5a9-978d09fac47a-kube-api-access-jxxqw\") pod \"root-account-create-update-rccst\" (UID: \"31099f81-185e-4771-b5a9-978d09fac47a\") " pod="openstack/root-account-create-update-rccst" Mar 12 15:06:05 crc kubenswrapper[4832]: I0312 15:06:05.754862 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rccst" Mar 12 15:06:06 crc kubenswrapper[4832]: I0312 15:06:06.628054 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="269dd911-a1c0-453a-ac9b-0b235541e941" path="/var/lib/kubelet/pods/269dd911-a1c0-453a-ac9b-0b235541e941/volumes" Mar 12 15:06:07 crc kubenswrapper[4832]: I0312 15:06:07.490086 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-etc-swift\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:06:07 crc kubenswrapper[4832]: I0312 15:06:07.510939 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2fcebd5-a8cd-4290-9055-e0a7bbec2854-etc-swift\") pod \"swift-storage-0\" (UID: \"c2fcebd5-a8cd-4290-9055-e0a7bbec2854\") " pod="openstack/swift-storage-0" Mar 12 15:06:07 crc kubenswrapper[4832]: I0312 15:06:07.581582 4832 generic.go:334] "Generic (PLEG): container finished" podID="02632af9-ed7f-4481-865f-698db662e6fe" containerID="66f5879d3fb87d29fc2e3d44974017a80d45165be054e357c0d5a7f7ecb02221" exitCode=0 Mar 12 15:06:07 crc kubenswrapper[4832]: I0312 15:06:07.581642 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dbzrr" event={"ID":"02632af9-ed7f-4481-865f-698db662e6fe","Type":"ContainerDied","Data":"66f5879d3fb87d29fc2e3d44974017a80d45165be054e357c0d5a7f7ecb02221"} Mar 12 15:06:07 crc kubenswrapper[4832]: I0312 15:06:07.610997 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.197788 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-6x8t6" podUID="9d4200a6-7cc2-4b4a-b01e-290567a2ec8c" containerName="ovn-controller" probeResult="failure" output=< Mar 12 15:06:09 crc kubenswrapper[4832]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 12 15:06:09 crc kubenswrapper[4832]: > Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.251133 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.254806 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kmmfr" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.481338 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6x8t6-config-9bq54"] Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.490718 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.494698 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.526906 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6x8t6-config-9bq54"] Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.600231 4832 generic.go:334] "Generic (PLEG): container finished" podID="fef23d2a-252b-4733-bb4e-e83d5de2f4f4" containerID="93b6dcd69cc1d58f2baf9259c24e904358b6c3d70cf097eb0116925f4f7421f6" exitCode=0 Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.600295 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fef23d2a-252b-4733-bb4e-e83d5de2f4f4","Type":"ContainerDied","Data":"93b6dcd69cc1d58f2baf9259c24e904358b6c3d70cf097eb0116925f4f7421f6"} Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.602902 4832 generic.go:334] "Generic (PLEG): container finished" podID="667d5405-474a-4ab3-bcbf-8fd5d1c179aa" containerID="c92c8d3b9a7e2cdb4724ce742d8387f839d1938d182a7d4c794c41617a533a88" exitCode=0 Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.602960 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"667d5405-474a-4ab3-bcbf-8fd5d1c179aa","Type":"ContainerDied","Data":"c92c8d3b9a7e2cdb4724ce742d8387f839d1938d182a7d4c794c41617a533a88"} Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.634727 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/15106abf-828a-4911-ba82-649cb3eadb5c-additional-scripts\") pod \"ovn-controller-6x8t6-config-9bq54\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:09 crc 
kubenswrapper[4832]: I0312 15:06:09.634778 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/15106abf-828a-4911-ba82-649cb3eadb5c-var-run-ovn\") pod \"ovn-controller-6x8t6-config-9bq54\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.634797 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/15106abf-828a-4911-ba82-649cb3eadb5c-var-log-ovn\") pod \"ovn-controller-6x8t6-config-9bq54\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.634892 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15106abf-828a-4911-ba82-649cb3eadb5c-scripts\") pod \"ovn-controller-6x8t6-config-9bq54\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.634916 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmrfj\" (UniqueName: \"kubernetes.io/projected/15106abf-828a-4911-ba82-649cb3eadb5c-kube-api-access-tmrfj\") pod \"ovn-controller-6x8t6-config-9bq54\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.634934 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/15106abf-828a-4911-ba82-649cb3eadb5c-var-run\") pod \"ovn-controller-6x8t6-config-9bq54\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " 
pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.737930 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/15106abf-828a-4911-ba82-649cb3eadb5c-additional-scripts\") pod \"ovn-controller-6x8t6-config-9bq54\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.738591 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/15106abf-828a-4911-ba82-649cb3eadb5c-var-run-ovn\") pod \"ovn-controller-6x8t6-config-9bq54\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.738908 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/15106abf-828a-4911-ba82-649cb3eadb5c-additional-scripts\") pod \"ovn-controller-6x8t6-config-9bq54\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.738932 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/15106abf-828a-4911-ba82-649cb3eadb5c-var-run-ovn\") pod \"ovn-controller-6x8t6-config-9bq54\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.738979 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/15106abf-828a-4911-ba82-649cb3eadb5c-var-log-ovn\") pod \"ovn-controller-6x8t6-config-9bq54\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " 
pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.739062 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/15106abf-828a-4911-ba82-649cb3eadb5c-var-log-ovn\") pod \"ovn-controller-6x8t6-config-9bq54\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.739556 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15106abf-828a-4911-ba82-649cb3eadb5c-scripts\") pod \"ovn-controller-6x8t6-config-9bq54\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.739651 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmrfj\" (UniqueName: \"kubernetes.io/projected/15106abf-828a-4911-ba82-649cb3eadb5c-kube-api-access-tmrfj\") pod \"ovn-controller-6x8t6-config-9bq54\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.739703 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/15106abf-828a-4911-ba82-649cb3eadb5c-var-run\") pod \"ovn-controller-6x8t6-config-9bq54\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.739813 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/15106abf-828a-4911-ba82-649cb3eadb5c-var-run\") pod \"ovn-controller-6x8t6-config-9bq54\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 
12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.742365 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15106abf-828a-4911-ba82-649cb3eadb5c-scripts\") pod \"ovn-controller-6x8t6-config-9bq54\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.763326 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmrfj\" (UniqueName: \"kubernetes.io/projected/15106abf-828a-4911-ba82-649cb3eadb5c-kube-api-access-tmrfj\") pod \"ovn-controller-6x8t6-config-9bq54\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:09 crc kubenswrapper[4832]: I0312 15:06:09.828458 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.352051 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dbzrr" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.490004 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02632af9-ed7f-4481-865f-698db662e6fe-scripts\") pod \"02632af9-ed7f-4481-865f-698db662e6fe\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.490394 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02632af9-ed7f-4481-865f-698db662e6fe-dispersionconf\") pod \"02632af9-ed7f-4481-865f-698db662e6fe\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.490448 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02632af9-ed7f-4481-865f-698db662e6fe-ring-data-devices\") pod \"02632af9-ed7f-4481-865f-698db662e6fe\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.490477 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02632af9-ed7f-4481-865f-698db662e6fe-etc-swift\") pod \"02632af9-ed7f-4481-865f-698db662e6fe\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.490590 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02632af9-ed7f-4481-865f-698db662e6fe-swiftconf\") pod \"02632af9-ed7f-4481-865f-698db662e6fe\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.490653 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/02632af9-ed7f-4481-865f-698db662e6fe-combined-ca-bundle\") pod \"02632af9-ed7f-4481-865f-698db662e6fe\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.490699 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq72r\" (UniqueName: \"kubernetes.io/projected/02632af9-ed7f-4481-865f-698db662e6fe-kube-api-access-dq72r\") pod \"02632af9-ed7f-4481-865f-698db662e6fe\" (UID: \"02632af9-ed7f-4481-865f-698db662e6fe\") " Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.492747 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02632af9-ed7f-4481-865f-698db662e6fe-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "02632af9-ed7f-4481-865f-698db662e6fe" (UID: "02632af9-ed7f-4481-865f-698db662e6fe"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.495429 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02632af9-ed7f-4481-865f-698db662e6fe-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "02632af9-ed7f-4481-865f-698db662e6fe" (UID: "02632af9-ed7f-4481-865f-698db662e6fe"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.524398 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02632af9-ed7f-4481-865f-698db662e6fe-kube-api-access-dq72r" (OuterVolumeSpecName: "kube-api-access-dq72r") pod "02632af9-ed7f-4481-865f-698db662e6fe" (UID: "02632af9-ed7f-4481-865f-698db662e6fe"). InnerVolumeSpecName "kube-api-access-dq72r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.527050 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02632af9-ed7f-4481-865f-698db662e6fe-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "02632af9-ed7f-4481-865f-698db662e6fe" (UID: "02632af9-ed7f-4481-865f-698db662e6fe"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.559577 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02632af9-ed7f-4481-865f-698db662e6fe-scripts" (OuterVolumeSpecName: "scripts") pod "02632af9-ed7f-4481-865f-698db662e6fe" (UID: "02632af9-ed7f-4481-865f-698db662e6fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.565055 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02632af9-ed7f-4481-865f-698db662e6fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02632af9-ed7f-4481-865f-698db662e6fe" (UID: "02632af9-ed7f-4481-865f-698db662e6fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.585342 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02632af9-ed7f-4481-865f-698db662e6fe-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "02632af9-ed7f-4481-865f-698db662e6fe" (UID: "02632af9-ed7f-4481-865f-698db662e6fe"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.595588 4832 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02632af9-ed7f-4481-865f-698db662e6fe-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.595629 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02632af9-ed7f-4481-865f-698db662e6fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.595644 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq72r\" (UniqueName: \"kubernetes.io/projected/02632af9-ed7f-4481-865f-698db662e6fe-kube-api-access-dq72r\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.595656 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02632af9-ed7f-4481-865f-698db662e6fe-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.595668 4832 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02632af9-ed7f-4481-865f-698db662e6fe-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.595681 4832 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02632af9-ed7f-4481-865f-698db662e6fe-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.595692 4832 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02632af9-ed7f-4481-865f-698db662e6fe-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.649419 4832 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fef23d2a-252b-4733-bb4e-e83d5de2f4f4","Type":"ContainerStarted","Data":"3c0588cb410895fd66ee00fd6cf380356c166231ccdfb260fbf0b23956184623"} Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.649684 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.653405 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dbzrr" event={"ID":"02632af9-ed7f-4481-865f-698db662e6fe","Type":"ContainerDied","Data":"8c63c95250ea32819e18856b43d34868b815884ee99792905d10a5982aa1011a"} Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.653453 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c63c95250ea32819e18856b43d34868b815884ee99792905d10a5982aa1011a" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.653585 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dbzrr" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.656487 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"667d5405-474a-4ab3-bcbf-8fd5d1c179aa","Type":"ContainerStarted","Data":"acc4670b7f5b5d88e913b3df9c986d9106a78f0d8af8878522b42dc6510e2801"} Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.656702 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.686940 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=50.948650436 podStartE2EDuration="58.686923884s" podCreationTimestamp="2026-03-12 15:05:14 +0000 UTC" firstStartedPulling="2026-03-12 15:05:26.673739344 +0000 UTC m=+1085.317753580" lastFinishedPulling="2026-03-12 15:05:34.412012792 +0000 UTC m=+1093.056027028" observedRunningTime="2026-03-12 15:06:12.681830916 +0000 UTC m=+1131.325845162" watchObservedRunningTime="2026-03-12 15:06:12.686923884 +0000 UTC m=+1131.330938110" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.717748 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.521561464 podStartE2EDuration="59.717726507s" podCreationTimestamp="2026-03-12 15:05:13 +0000 UTC" firstStartedPulling="2026-03-12 15:05:26.952626139 +0000 UTC m=+1085.596640365" lastFinishedPulling="2026-03-12 15:05:34.148791192 +0000 UTC m=+1092.792805408" observedRunningTime="2026-03-12 15:06:12.70989408 +0000 UTC m=+1131.353908306" watchObservedRunningTime="2026-03-12 15:06:12.717726507 +0000 UTC m=+1131.361740733" Mar 12 15:06:12 crc kubenswrapper[4832]: I0312 15:06:12.856148 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rccst"] Mar 12 15:06:13 crc kubenswrapper[4832]: 
I0312 15:06:13.000010 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 12 15:06:13 crc kubenswrapper[4832]: I0312 15:06:13.064435 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 15:06:13 crc kubenswrapper[4832]: I0312 15:06:13.080845 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6x8t6-config-9bq54"] Mar 12 15:06:13 crc kubenswrapper[4832]: I0312 15:06:13.666716 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-w2psf" event={"ID":"98264f95-8803-42bb-b985-359a3556a90c","Type":"ContainerStarted","Data":"b0d26359005d78b681abb9ff6bbc13cf71ec617e38c46b82e216ebdc093d29bd"} Mar 12 15:06:13 crc kubenswrapper[4832]: I0312 15:06:13.667888 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2fcebd5-a8cd-4290-9055-e0a7bbec2854","Type":"ContainerStarted","Data":"6f671985eb3c408b3fe0ef56abc40829956ab8860d2a20bd4e18d053144f808b"} Mar 12 15:06:13 crc kubenswrapper[4832]: I0312 15:06:13.669642 4832 generic.go:334] "Generic (PLEG): container finished" podID="31099f81-185e-4771-b5a9-978d09fac47a" containerID="623e5e290fe496c1e6164230ffeb2082353bd4876b6ba28a04c0485ad1005ed5" exitCode=0 Mar 12 15:06:13 crc kubenswrapper[4832]: I0312 15:06:13.669710 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rccst" event={"ID":"31099f81-185e-4771-b5a9-978d09fac47a","Type":"ContainerDied","Data":"623e5e290fe496c1e6164230ffeb2082353bd4876b6ba28a04c0485ad1005ed5"} Mar 12 15:06:13 crc kubenswrapper[4832]: I0312 15:06:13.669736 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rccst" event={"ID":"31099f81-185e-4771-b5a9-978d09fac47a","Type":"ContainerStarted","Data":"c23116001b4b195466986e27ac7aab9e5fbc2b336a99b2ac78a8f673c26ca98b"} Mar 12 15:06:13 crc kubenswrapper[4832]: I0312 
15:06:13.671292 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6x8t6-config-9bq54" event={"ID":"15106abf-828a-4911-ba82-649cb3eadb5c","Type":"ContainerStarted","Data":"7e814d3115cf54d45555366047d5d7108a4593019e91e6bfaf032cd87c783bcc"} Mar 12 15:06:13 crc kubenswrapper[4832]: I0312 15:06:13.671343 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6x8t6-config-9bq54" event={"ID":"15106abf-828a-4911-ba82-649cb3eadb5c","Type":"ContainerStarted","Data":"aa9a4909f80f9a26ca33259b16e959ce560edf775036361c87787f844097fa2f"} Mar 12 15:06:13 crc kubenswrapper[4832]: I0312 15:06:13.690730 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-w2psf" podStartSLOduration=3.707927248 podStartE2EDuration="15.690708531s" podCreationTimestamp="2026-03-12 15:05:58 +0000 UTC" firstStartedPulling="2026-03-12 15:06:00.250048848 +0000 UTC m=+1118.894063074" lastFinishedPulling="2026-03-12 15:06:12.232830131 +0000 UTC m=+1130.876844357" observedRunningTime="2026-03-12 15:06:13.687373415 +0000 UTC m=+1132.331387641" watchObservedRunningTime="2026-03-12 15:06:13.690708531 +0000 UTC m=+1132.334722757" Mar 12 15:06:13 crc kubenswrapper[4832]: I0312 15:06:13.723211 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-6x8t6-config-9bq54" podStartSLOduration=4.723195614 podStartE2EDuration="4.723195614s" podCreationTimestamp="2026-03-12 15:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:06:13.719350902 +0000 UTC m=+1132.363365148" watchObservedRunningTime="2026-03-12 15:06:13.723195614 +0000 UTC m=+1132.367209840" Mar 12 15:06:14 crc kubenswrapper[4832]: I0312 15:06:14.196327 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-6x8t6" Mar 12 15:06:14 crc 
kubenswrapper[4832]: I0312 15:06:14.682949 4832 generic.go:334] "Generic (PLEG): container finished" podID="15106abf-828a-4911-ba82-649cb3eadb5c" containerID="7e814d3115cf54d45555366047d5d7108a4593019e91e6bfaf032cd87c783bcc" exitCode=0 Mar 12 15:06:14 crc kubenswrapper[4832]: I0312 15:06:14.683054 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6x8t6-config-9bq54" event={"ID":"15106abf-828a-4911-ba82-649cb3eadb5c","Type":"ContainerDied","Data":"7e814d3115cf54d45555366047d5d7108a4593019e91e6bfaf032cd87c783bcc"} Mar 12 15:06:14 crc kubenswrapper[4832]: I0312 15:06:14.998752 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rccst" Mar 12 15:06:15 crc kubenswrapper[4832]: I0312 15:06:15.035838 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxxqw\" (UniqueName: \"kubernetes.io/projected/31099f81-185e-4771-b5a9-978d09fac47a-kube-api-access-jxxqw\") pod \"31099f81-185e-4771-b5a9-978d09fac47a\" (UID: \"31099f81-185e-4771-b5a9-978d09fac47a\") " Mar 12 15:06:15 crc kubenswrapper[4832]: I0312 15:06:15.036040 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31099f81-185e-4771-b5a9-978d09fac47a-operator-scripts\") pod \"31099f81-185e-4771-b5a9-978d09fac47a\" (UID: \"31099f81-185e-4771-b5a9-978d09fac47a\") " Mar 12 15:06:15 crc kubenswrapper[4832]: I0312 15:06:15.036804 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31099f81-185e-4771-b5a9-978d09fac47a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31099f81-185e-4771-b5a9-978d09fac47a" (UID: "31099f81-185e-4771-b5a9-978d09fac47a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:15 crc kubenswrapper[4832]: I0312 15:06:15.039685 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31099f81-185e-4771-b5a9-978d09fac47a-kube-api-access-jxxqw" (OuterVolumeSpecName: "kube-api-access-jxxqw") pod "31099f81-185e-4771-b5a9-978d09fac47a" (UID: "31099f81-185e-4771-b5a9-978d09fac47a"). InnerVolumeSpecName "kube-api-access-jxxqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:15 crc kubenswrapper[4832]: I0312 15:06:15.137428 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxxqw\" (UniqueName: \"kubernetes.io/projected/31099f81-185e-4771-b5a9-978d09fac47a-kube-api-access-jxxqw\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:15 crc kubenswrapper[4832]: I0312 15:06:15.137458 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31099f81-185e-4771-b5a9-978d09fac47a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:15 crc kubenswrapper[4832]: I0312 15:06:15.691645 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2fcebd5-a8cd-4290-9055-e0a7bbec2854","Type":"ContainerStarted","Data":"395169a8af1975157e3ef8500401e49c67c5d27048be86c4e994a0aec4184cc3"} Mar 12 15:06:15 crc kubenswrapper[4832]: I0312 15:06:15.691685 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2fcebd5-a8cd-4290-9055-e0a7bbec2854","Type":"ContainerStarted","Data":"ba1b9960066e0a3cdda3bb492f74f0b99e332860417c7ac56c9d43c5fa90ea38"} Mar 12 15:06:15 crc kubenswrapper[4832]: I0312 15:06:15.691696 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2fcebd5-a8cd-4290-9055-e0a7bbec2854","Type":"ContainerStarted","Data":"fb5e198c1e8f773e080e57f8cffc88c05d09c4cf39cb6201760e539ba4895632"} Mar 12 15:06:15 crc 
kubenswrapper[4832]: I0312 15:06:15.691704 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2fcebd5-a8cd-4290-9055-e0a7bbec2854","Type":"ContainerStarted","Data":"f77dc877c57b591814680e0a912f5f0017eb623132fdd6e4a5a999f15500bd9c"} Mar 12 15:06:15 crc kubenswrapper[4832]: I0312 15:06:15.692943 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rccst" event={"ID":"31099f81-185e-4771-b5a9-978d09fac47a","Type":"ContainerDied","Data":"c23116001b4b195466986e27ac7aab9e5fbc2b336a99b2ac78a8f673c26ca98b"} Mar 12 15:06:15 crc kubenswrapper[4832]: I0312 15:06:15.692967 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c23116001b4b195466986e27ac7aab9e5fbc2b336a99b2ac78a8f673c26ca98b" Mar 12 15:06:15 crc kubenswrapper[4832]: I0312 15:06:15.693081 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rccst" Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.070585 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.154226 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmrfj\" (UniqueName: \"kubernetes.io/projected/15106abf-828a-4911-ba82-649cb3eadb5c-kube-api-access-tmrfj\") pod \"15106abf-828a-4911-ba82-649cb3eadb5c\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.154332 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/15106abf-828a-4911-ba82-649cb3eadb5c-var-run-ovn\") pod \"15106abf-828a-4911-ba82-649cb3eadb5c\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.154373 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/15106abf-828a-4911-ba82-649cb3eadb5c-additional-scripts\") pod \"15106abf-828a-4911-ba82-649cb3eadb5c\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.154398 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/15106abf-828a-4911-ba82-649cb3eadb5c-var-run\") pod \"15106abf-828a-4911-ba82-649cb3eadb5c\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.154442 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/15106abf-828a-4911-ba82-649cb3eadb5c-var-log-ovn\") pod \"15106abf-828a-4911-ba82-649cb3eadb5c\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.154471 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/15106abf-828a-4911-ba82-649cb3eadb5c-scripts\") pod \"15106abf-828a-4911-ba82-649cb3eadb5c\" (UID: \"15106abf-828a-4911-ba82-649cb3eadb5c\") " Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.154886 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15106abf-828a-4911-ba82-649cb3eadb5c-var-run" (OuterVolumeSpecName: "var-run") pod "15106abf-828a-4911-ba82-649cb3eadb5c" (UID: "15106abf-828a-4911-ba82-649cb3eadb5c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.154954 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15106abf-828a-4911-ba82-649cb3eadb5c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "15106abf-828a-4911-ba82-649cb3eadb5c" (UID: "15106abf-828a-4911-ba82-649cb3eadb5c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.154972 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15106abf-828a-4911-ba82-649cb3eadb5c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "15106abf-828a-4911-ba82-649cb3eadb5c" (UID: "15106abf-828a-4911-ba82-649cb3eadb5c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.155726 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15106abf-828a-4911-ba82-649cb3eadb5c-scripts" (OuterVolumeSpecName: "scripts") pod "15106abf-828a-4911-ba82-649cb3eadb5c" (UID: "15106abf-828a-4911-ba82-649cb3eadb5c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.156794 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15106abf-828a-4911-ba82-649cb3eadb5c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "15106abf-828a-4911-ba82-649cb3eadb5c" (UID: "15106abf-828a-4911-ba82-649cb3eadb5c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.158550 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15106abf-828a-4911-ba82-649cb3eadb5c-kube-api-access-tmrfj" (OuterVolumeSpecName: "kube-api-access-tmrfj") pod "15106abf-828a-4911-ba82-649cb3eadb5c" (UID: "15106abf-828a-4911-ba82-649cb3eadb5c"). InnerVolumeSpecName "kube-api-access-tmrfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.200423 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-6x8t6-config-9bq54"] Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.207958 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-6x8t6-config-9bq54"] Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.256026 4832 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/15106abf-828a-4911-ba82-649cb3eadb5c-var-run\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.256061 4832 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/15106abf-828a-4911-ba82-649cb3eadb5c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.256073 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/15106abf-828a-4911-ba82-649cb3eadb5c-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.256085 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmrfj\" (UniqueName: \"kubernetes.io/projected/15106abf-828a-4911-ba82-649cb3eadb5c-kube-api-access-tmrfj\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.256096 4832 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/15106abf-828a-4911-ba82-649cb3eadb5c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.256109 4832 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/15106abf-828a-4911-ba82-649cb3eadb5c-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.636912 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15106abf-828a-4911-ba82-649cb3eadb5c" path="/var/lib/kubelet/pods/15106abf-828a-4911-ba82-649cb3eadb5c/volumes" Mar 12 15:06:16 crc kubenswrapper[4832]: E0312 15:06:16.703071 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15106abf_828a_4911_ba82_649cb3eadb5c.slice/crio-aa9a4909f80f9a26ca33259b16e959ce560edf775036361c87787f844097fa2f\": RecentStats: unable to find data in memory cache]" Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.703761 4832 scope.go:117] "RemoveContainer" containerID="7e814d3115cf54d45555366047d5d7108a4593019e91e6bfaf032cd87c783bcc" Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.703854 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6x8t6-config-9bq54" Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.771268 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-rccst"] Mar 12 15:06:16 crc kubenswrapper[4832]: I0312 15:06:16.788168 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-rccst"] Mar 12 15:06:17 crc kubenswrapper[4832]: I0312 15:06:17.716201 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2fcebd5-a8cd-4290-9055-e0a7bbec2854","Type":"ContainerStarted","Data":"6a8484293b6ff76641e036f895c1f9793104cfa6ddc39c3cc62c57e633ac2dad"} Mar 12 15:06:17 crc kubenswrapper[4832]: I0312 15:06:17.716561 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2fcebd5-a8cd-4290-9055-e0a7bbec2854","Type":"ContainerStarted","Data":"0a3a1f656bb9305d4abf24aace9075168f3474fc1bf155730ec007321ee118f0"} Mar 12 15:06:17 crc kubenswrapper[4832]: I0312 15:06:17.716579 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2fcebd5-a8cd-4290-9055-e0a7bbec2854","Type":"ContainerStarted","Data":"7934a1d04f42cdcf09ef2d43769012198f775d5149f2cd88c69a3b82f7a01808"} Mar 12 15:06:17 crc kubenswrapper[4832]: I0312 15:06:17.716591 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2fcebd5-a8cd-4290-9055-e0a7bbec2854","Type":"ContainerStarted","Data":"10fdb8f1e0d7fced22c676ead3eec590d1b1436dfb88102b3a868111b8cf510b"} Mar 12 15:06:18 crc kubenswrapper[4832]: I0312 15:06:18.633068 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31099f81-185e-4771-b5a9-978d09fac47a" path="/var/lib/kubelet/pods/31099f81-185e-4771-b5a9-978d09fac47a/volumes" Mar 12 15:06:19 crc kubenswrapper[4832]: I0312 15:06:19.736474 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="98264f95-8803-42bb-b985-359a3556a90c" containerID="b0d26359005d78b681abb9ff6bbc13cf71ec617e38c46b82e216ebdc093d29bd" exitCode=0 Mar 12 15:06:19 crc kubenswrapper[4832]: I0312 15:06:19.736541 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-w2psf" event={"ID":"98264f95-8803-42bb-b985-359a3556a90c","Type":"ContainerDied","Data":"b0d26359005d78b681abb9ff6bbc13cf71ec617e38c46b82e216ebdc093d29bd"} Mar 12 15:06:19 crc kubenswrapper[4832]: I0312 15:06:19.742292 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2fcebd5-a8cd-4290-9055-e0a7bbec2854","Type":"ContainerStarted","Data":"4e6b191abf9728005346980cf82e8b36441c4973f3adcef4862b95a336e8f4de"} Mar 12 15:06:19 crc kubenswrapper[4832]: I0312 15:06:19.742339 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2fcebd5-a8cd-4290-9055-e0a7bbec2854","Type":"ContainerStarted","Data":"648fc05e7d7e0cac66568335d6088c659dd889eb695a43746bb49bcc2833f722"} Mar 12 15:06:19 crc kubenswrapper[4832]: I0312 15:06:19.742356 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2fcebd5-a8cd-4290-9055-e0a7bbec2854","Type":"ContainerStarted","Data":"a9efa828f5dde963e39166bef033c6cacc22fe076c1f2e9cff03405c44a9e8ba"} Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.474375 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vgdpl"] Mar 12 15:06:20 crc kubenswrapper[4832]: E0312 15:06:20.474992 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02632af9-ed7f-4481-865f-698db662e6fe" containerName="swift-ring-rebalance" Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.475011 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="02632af9-ed7f-4481-865f-698db662e6fe" containerName="swift-ring-rebalance" Mar 12 15:06:20 crc kubenswrapper[4832]: E0312 15:06:20.475020 4832 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31099f81-185e-4771-b5a9-978d09fac47a" containerName="mariadb-account-create-update" Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.475027 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="31099f81-185e-4771-b5a9-978d09fac47a" containerName="mariadb-account-create-update" Mar 12 15:06:20 crc kubenswrapper[4832]: E0312 15:06:20.475038 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15106abf-828a-4911-ba82-649cb3eadb5c" containerName="ovn-config" Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.475044 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="15106abf-828a-4911-ba82-649cb3eadb5c" containerName="ovn-config" Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.475185 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="15106abf-828a-4911-ba82-649cb3eadb5c" containerName="ovn-config" Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.475205 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="02632af9-ed7f-4481-865f-698db662e6fe" containerName="swift-ring-rebalance" Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.475219 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="31099f81-185e-4771-b5a9-978d09fac47a" containerName="mariadb-account-create-update" Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.475703 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vgdpl" Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.483535 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.488006 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vgdpl"] Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.522848 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp7ts\" (UniqueName: \"kubernetes.io/projected/2828d553-f62d-41a5-bcf1-0c4857572aae-kube-api-access-jp7ts\") pod \"root-account-create-update-vgdpl\" (UID: \"2828d553-f62d-41a5-bcf1-0c4857572aae\") " pod="openstack/root-account-create-update-vgdpl" Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.523087 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2828d553-f62d-41a5-bcf1-0c4857572aae-operator-scripts\") pod \"root-account-create-update-vgdpl\" (UID: \"2828d553-f62d-41a5-bcf1-0c4857572aae\") " pod="openstack/root-account-create-update-vgdpl" Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.624283 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2828d553-f62d-41a5-bcf1-0c4857572aae-operator-scripts\") pod \"root-account-create-update-vgdpl\" (UID: \"2828d553-f62d-41a5-bcf1-0c4857572aae\") " pod="openstack/root-account-create-update-vgdpl" Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.624351 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp7ts\" (UniqueName: \"kubernetes.io/projected/2828d553-f62d-41a5-bcf1-0c4857572aae-kube-api-access-jp7ts\") pod \"root-account-create-update-vgdpl\" (UID: 
\"2828d553-f62d-41a5-bcf1-0c4857572aae\") " pod="openstack/root-account-create-update-vgdpl" Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.625227 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2828d553-f62d-41a5-bcf1-0c4857572aae-operator-scripts\") pod \"root-account-create-update-vgdpl\" (UID: \"2828d553-f62d-41a5-bcf1-0c4857572aae\") " pod="openstack/root-account-create-update-vgdpl" Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.640139 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp7ts\" (UniqueName: \"kubernetes.io/projected/2828d553-f62d-41a5-bcf1-0c4857572aae-kube-api-access-jp7ts\") pod \"root-account-create-update-vgdpl\" (UID: \"2828d553-f62d-41a5-bcf1-0c4857572aae\") " pod="openstack/root-account-create-update-vgdpl" Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.757839 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2fcebd5-a8cd-4290-9055-e0a7bbec2854","Type":"ContainerStarted","Data":"7ce9340ff9aea1340485c66f8f546032108f364d3ac48cae820da0d7f109d721"} Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.757895 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2fcebd5-a8cd-4290-9055-e0a7bbec2854","Type":"ContainerStarted","Data":"967573be4953d5912a93a825b45fa5c9cf46b7c0d06006e5eb0efdcc389c8b92"} Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.757914 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2fcebd5-a8cd-4290-9055-e0a7bbec2854","Type":"ContainerStarted","Data":"d4a5b0b3e226071f5ebab55d8896b14740f92f05226ec104671415f90abfe8e1"} Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.757931 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c2fcebd5-a8cd-4290-9055-e0a7bbec2854","Type":"ContainerStarted","Data":"8ffb03c2148977d1c880268de4e595bf6733a2659c43bdf34d3e0a16994256ca"} Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.813034 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=24.843330336 podStartE2EDuration="30.813013168s" podCreationTimestamp="2026-03-12 15:05:50 +0000 UTC" firstStartedPulling="2026-03-12 15:06:13.063676342 +0000 UTC m=+1131.707690568" lastFinishedPulling="2026-03-12 15:06:19.033359174 +0000 UTC m=+1137.677373400" observedRunningTime="2026-03-12 15:06:20.799911338 +0000 UTC m=+1139.443925584" watchObservedRunningTime="2026-03-12 15:06:20.813013168 +0000 UTC m=+1139.457027414" Mar 12 15:06:20 crc kubenswrapper[4832]: I0312 15:06:20.834528 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vgdpl" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.131955 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-nt2jb"] Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.134797 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.148160 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.148256 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-nt2jb"] Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.233752 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-nt2jb\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.233799 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbp2b\" (UniqueName: \"kubernetes.io/projected/b2cf4887-c8d9-4166-b63d-62524092bde4-kube-api-access-lbp2b\") pod \"dnsmasq-dns-5c79d794d7-nt2jb\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.233844 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-nt2jb\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.234027 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-config\") pod \"dnsmasq-dns-5c79d794d7-nt2jb\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " 
pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.234061 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-nt2jb\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.234107 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-nt2jb\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.330114 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vgdpl"] Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.335550 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-config\") pod \"dnsmasq-dns-5c79d794d7-nt2jb\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.335616 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-nt2jb\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.335684 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-nt2jb\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.335729 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-nt2jb\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.335755 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbp2b\" (UniqueName: \"kubernetes.io/projected/b2cf4887-c8d9-4166-b63d-62524092bde4-kube-api-access-lbp2b\") pod \"dnsmasq-dns-5c79d794d7-nt2jb\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.335802 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-nt2jb\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.336426 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-config\") pod \"dnsmasq-dns-5c79d794d7-nt2jb\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.336634 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-nt2jb\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.336722 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-nt2jb\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.336735 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-nt2jb\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.337498 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-nt2jb\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.355585 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbp2b\" (UniqueName: \"kubernetes.io/projected/b2cf4887-c8d9-4166-b63d-62524092bde4-kube-api-access-lbp2b\") pod \"dnsmasq-dns-5c79d794d7-nt2jb\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.451535 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.765766 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vgdpl" event={"ID":"2828d553-f62d-41a5-bcf1-0c4857572aae","Type":"ContainerStarted","Data":"a76b4353436bcc787f20f3a094c4f8c6bb17ae0f808781d031c2a091bf84b611"} Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.766032 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vgdpl" event={"ID":"2828d553-f62d-41a5-bcf1-0c4857572aae","Type":"ContainerStarted","Data":"ec09876bbbfe1f125aaab9f3c2f916941f2856d9f68cb6e2c4fc56adce41c3c9"} Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.778764 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-vgdpl" podStartSLOduration=1.7787481330000001 podStartE2EDuration="1.778748133s" podCreationTimestamp="2026-03-12 15:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:06:21.777662372 +0000 UTC m=+1140.421676598" watchObservedRunningTime="2026-03-12 15:06:21.778748133 +0000 UTC m=+1140.422762359" Mar 12 15:06:21 crc kubenswrapper[4832]: I0312 15:06:21.888027 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-nt2jb"] Mar 12 15:06:21 crc kubenswrapper[4832]: W0312 15:06:21.900599 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2cf4887_c8d9_4166_b63d_62524092bde4.slice/crio-6c5c5f5ad0f954993290a443e584409a35c2b6ef6203e91dee220fba97396e02 WatchSource:0}: Error finding container 6c5c5f5ad0f954993290a443e584409a35c2b6ef6203e91dee220fba97396e02: Status 404 returned error can't find the container with id 6c5c5f5ad0f954993290a443e584409a35c2b6ef6203e91dee220fba97396e02 Mar 
12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.499523 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-w2psf" Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.658155 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/98264f95-8803-42bb-b985-359a3556a90c-db-sync-config-data\") pod \"98264f95-8803-42bb-b985-359a3556a90c\" (UID: \"98264f95-8803-42bb-b985-359a3556a90c\") " Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.658498 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnfj8\" (UniqueName: \"kubernetes.io/projected/98264f95-8803-42bb-b985-359a3556a90c-kube-api-access-bnfj8\") pod \"98264f95-8803-42bb-b985-359a3556a90c\" (UID: \"98264f95-8803-42bb-b985-359a3556a90c\") " Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.658570 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98264f95-8803-42bb-b985-359a3556a90c-config-data\") pod \"98264f95-8803-42bb-b985-359a3556a90c\" (UID: \"98264f95-8803-42bb-b985-359a3556a90c\") " Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.658609 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98264f95-8803-42bb-b985-359a3556a90c-combined-ca-bundle\") pod \"98264f95-8803-42bb-b985-359a3556a90c\" (UID: \"98264f95-8803-42bb-b985-359a3556a90c\") " Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.662159 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98264f95-8803-42bb-b985-359a3556a90c-kube-api-access-bnfj8" (OuterVolumeSpecName: "kube-api-access-bnfj8") pod "98264f95-8803-42bb-b985-359a3556a90c" (UID: "98264f95-8803-42bb-b985-359a3556a90c"). 
InnerVolumeSpecName "kube-api-access-bnfj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.662248 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98264f95-8803-42bb-b985-359a3556a90c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "98264f95-8803-42bb-b985-359a3556a90c" (UID: "98264f95-8803-42bb-b985-359a3556a90c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.678446 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98264f95-8803-42bb-b985-359a3556a90c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98264f95-8803-42bb-b985-359a3556a90c" (UID: "98264f95-8803-42bb-b985-359a3556a90c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.704322 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98264f95-8803-42bb-b985-359a3556a90c-config-data" (OuterVolumeSpecName: "config-data") pod "98264f95-8803-42bb-b985-359a3556a90c" (UID: "98264f95-8803-42bb-b985-359a3556a90c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.761281 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98264f95-8803-42bb-b985-359a3556a90c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.761322 4832 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/98264f95-8803-42bb-b985-359a3556a90c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.761334 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnfj8\" (UniqueName: \"kubernetes.io/projected/98264f95-8803-42bb-b985-359a3556a90c-kube-api-access-bnfj8\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.761348 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98264f95-8803-42bb-b985-359a3556a90c-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.775537 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-w2psf" Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.775433 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-w2psf" event={"ID":"98264f95-8803-42bb-b985-359a3556a90c","Type":"ContainerDied","Data":"ea836f7dec9e2512766fad16fb7ea5c9a3a183653494b191e2f4c11b3a8add9e"} Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.775799 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea836f7dec9e2512766fad16fb7ea5c9a3a183653494b191e2f4c11b3a8add9e" Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.777281 4832 generic.go:334] "Generic (PLEG): container finished" podID="2828d553-f62d-41a5-bcf1-0c4857572aae" containerID="a76b4353436bcc787f20f3a094c4f8c6bb17ae0f808781d031c2a091bf84b611" exitCode=0 Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.777332 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vgdpl" event={"ID":"2828d553-f62d-41a5-bcf1-0c4857572aae","Type":"ContainerDied","Data":"a76b4353436bcc787f20f3a094c4f8c6bb17ae0f808781d031c2a091bf84b611"} Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.792450 4832 generic.go:334] "Generic (PLEG): container finished" podID="b2cf4887-c8d9-4166-b63d-62524092bde4" containerID="0598256ec77ae2160e2de69c6cc96646245a5289b0dfad78f4e8bd42b38632a1" exitCode=0 Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.792518 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" event={"ID":"b2cf4887-c8d9-4166-b63d-62524092bde4","Type":"ContainerDied","Data":"0598256ec77ae2160e2de69c6cc96646245a5289b0dfad78f4e8bd42b38632a1"} Mar 12 15:06:22 crc kubenswrapper[4832]: I0312 15:06:22.792550 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" 
event={"ID":"b2cf4887-c8d9-4166-b63d-62524092bde4","Type":"ContainerStarted","Data":"6c5c5f5ad0f954993290a443e584409a35c2b6ef6203e91dee220fba97396e02"} Mar 12 15:06:23 crc kubenswrapper[4832]: I0312 15:06:23.806626 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" event={"ID":"b2cf4887-c8d9-4166-b63d-62524092bde4","Type":"ContainerStarted","Data":"39a7d82a3f56cb198c6a6309ca789b0f15935be087411679d503efb1f19a5dbc"} Mar 12 15:06:23 crc kubenswrapper[4832]: I0312 15:06:23.807018 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.072655 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" podStartSLOduration=3.072638625 podStartE2EDuration="3.072638625s" podCreationTimestamp="2026-03-12 15:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:06:23.837472233 +0000 UTC m=+1142.481486459" watchObservedRunningTime="2026-03-12 15:06:24.072638625 +0000 UTC m=+1142.716652851" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.077233 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-nt2jb"] Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.127958 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-bqv5j"] Mar 12 15:06:24 crc kubenswrapper[4832]: E0312 15:06:24.133744 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98264f95-8803-42bb-b985-359a3556a90c" containerName="glance-db-sync" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.133786 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="98264f95-8803-42bb-b985-359a3556a90c" containerName="glance-db-sync" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.134102 
4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="98264f95-8803-42bb-b985-359a3556a90c" containerName="glance-db-sync" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.135186 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.141840 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-bqv5j"] Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.294848 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-bqv5j\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.294907 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-bqv5j\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.294937 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-config\") pod \"dnsmasq-dns-5f59b8f679-bqv5j\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.295049 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5f59b8f679-bqv5j\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.295182 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x7w6\" (UniqueName: \"kubernetes.io/projected/9fabd59a-046c-4afa-b884-f5a83cc91a53-kube-api-access-4x7w6\") pod \"dnsmasq-dns-5f59b8f679-bqv5j\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.295274 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-bqv5j\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.330537 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vgdpl" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.399487 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x7w6\" (UniqueName: \"kubernetes.io/projected/9fabd59a-046c-4afa-b884-f5a83cc91a53-kube-api-access-4x7w6\") pod \"dnsmasq-dns-5f59b8f679-bqv5j\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.399908 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-bqv5j\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.400062 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-bqv5j\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.400099 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-bqv5j\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.400128 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-config\") pod \"dnsmasq-dns-5f59b8f679-bqv5j\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " 
pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.400155 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-bqv5j\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.401171 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-bqv5j\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.401803 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-bqv5j\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.402410 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-bqv5j\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.403104 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-bqv5j\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: 
I0312 15:06:24.403732 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-config\") pod \"dnsmasq-dns-5f59b8f679-bqv5j\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.419702 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x7w6\" (UniqueName: \"kubernetes.io/projected/9fabd59a-046c-4afa-b884-f5a83cc91a53-kube-api-access-4x7w6\") pod \"dnsmasq-dns-5f59b8f679-bqv5j\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.470942 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.500835 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp7ts\" (UniqueName: \"kubernetes.io/projected/2828d553-f62d-41a5-bcf1-0c4857572aae-kube-api-access-jp7ts\") pod \"2828d553-f62d-41a5-bcf1-0c4857572aae\" (UID: \"2828d553-f62d-41a5-bcf1-0c4857572aae\") " Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.500954 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2828d553-f62d-41a5-bcf1-0c4857572aae-operator-scripts\") pod \"2828d553-f62d-41a5-bcf1-0c4857572aae\" (UID: \"2828d553-f62d-41a5-bcf1-0c4857572aae\") " Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.501639 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2828d553-f62d-41a5-bcf1-0c4857572aae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2828d553-f62d-41a5-bcf1-0c4857572aae" (UID: "2828d553-f62d-41a5-bcf1-0c4857572aae"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.505847 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2828d553-f62d-41a5-bcf1-0c4857572aae-kube-api-access-jp7ts" (OuterVolumeSpecName: "kube-api-access-jp7ts") pod "2828d553-f62d-41a5-bcf1-0c4857572aae" (UID: "2828d553-f62d-41a5-bcf1-0c4857572aae"). InnerVolumeSpecName "kube-api-access-jp7ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.603468 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp7ts\" (UniqueName: \"kubernetes.io/projected/2828d553-f62d-41a5-bcf1-0c4857572aae-kube-api-access-jp7ts\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.603542 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2828d553-f62d-41a5-bcf1-0c4857572aae-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.813976 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vgdpl" event={"ID":"2828d553-f62d-41a5-bcf1-0c4857572aae","Type":"ContainerDied","Data":"ec09876bbbfe1f125aaab9f3c2f916941f2856d9f68cb6e2c4fc56adce41c3c9"} Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.814026 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec09876bbbfe1f125aaab9f3c2f916941f2856d9f68cb6e2c4fc56adce41c3c9" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.814002 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vgdpl" Mar 12 15:06:24 crc kubenswrapper[4832]: I0312 15:06:24.919311 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-bqv5j"] Mar 12 15:06:25 crc kubenswrapper[4832]: I0312 15:06:25.192755 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:06:25 crc kubenswrapper[4832]: I0312 15:06:25.532683 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 12 15:06:25 crc kubenswrapper[4832]: I0312 15:06:25.822612 4832 generic.go:334] "Generic (PLEG): container finished" podID="9fabd59a-046c-4afa-b884-f5a83cc91a53" containerID="e3c5af13b0e15840d0ce592aee560451f86670a0fc2563dc381e8149c98d58fb" exitCode=0 Mar 12 15:06:25 crc kubenswrapper[4832]: I0312 15:06:25.822794 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" podUID="b2cf4887-c8d9-4166-b63d-62524092bde4" containerName="dnsmasq-dns" containerID="cri-o://39a7d82a3f56cb198c6a6309ca789b0f15935be087411679d503efb1f19a5dbc" gracePeriod=10 Mar 12 15:06:25 crc kubenswrapper[4832]: I0312 15:06:25.823477 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" event={"ID":"9fabd59a-046c-4afa-b884-f5a83cc91a53","Type":"ContainerDied","Data":"e3c5af13b0e15840d0ce592aee560451f86670a0fc2563dc381e8149c98d58fb"} Mar 12 15:06:25 crc kubenswrapper[4832]: I0312 15:06:25.823522 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" event={"ID":"9fabd59a-046c-4afa-b884-f5a83cc91a53","Type":"ContainerStarted","Data":"1fd47d324368c6398b3c2f572e00180eb8278b4779945486d96d83ea2825de1a"} Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.198066 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.326597 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-ovsdbserver-nb\") pod \"b2cf4887-c8d9-4166-b63d-62524092bde4\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.326674 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-dns-swift-storage-0\") pod \"b2cf4887-c8d9-4166-b63d-62524092bde4\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.326716 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-config\") pod \"b2cf4887-c8d9-4166-b63d-62524092bde4\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.326743 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbp2b\" (UniqueName: \"kubernetes.io/projected/b2cf4887-c8d9-4166-b63d-62524092bde4-kube-api-access-lbp2b\") pod \"b2cf4887-c8d9-4166-b63d-62524092bde4\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.326800 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-dns-svc\") pod \"b2cf4887-c8d9-4166-b63d-62524092bde4\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.326921 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-ovsdbserver-sb\") pod \"b2cf4887-c8d9-4166-b63d-62524092bde4\" (UID: \"b2cf4887-c8d9-4166-b63d-62524092bde4\") " Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.334675 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2cf4887-c8d9-4166-b63d-62524092bde4-kube-api-access-lbp2b" (OuterVolumeSpecName: "kube-api-access-lbp2b") pod "b2cf4887-c8d9-4166-b63d-62524092bde4" (UID: "b2cf4887-c8d9-4166-b63d-62524092bde4"). InnerVolumeSpecName "kube-api-access-lbp2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.366935 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-config" (OuterVolumeSpecName: "config") pod "b2cf4887-c8d9-4166-b63d-62524092bde4" (UID: "b2cf4887-c8d9-4166-b63d-62524092bde4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.368446 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b2cf4887-c8d9-4166-b63d-62524092bde4" (UID: "b2cf4887-c8d9-4166-b63d-62524092bde4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.368568 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b2cf4887-c8d9-4166-b63d-62524092bde4" (UID: "b2cf4887-c8d9-4166-b63d-62524092bde4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.390392 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b2cf4887-c8d9-4166-b63d-62524092bde4" (UID: "b2cf4887-c8d9-4166-b63d-62524092bde4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.396880 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b2cf4887-c8d9-4166-b63d-62524092bde4" (UID: "b2cf4887-c8d9-4166-b63d-62524092bde4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.428522 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.428570 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.428588 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.428611 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbp2b\" (UniqueName: \"kubernetes.io/projected/b2cf4887-c8d9-4166-b63d-62524092bde4-kube-api-access-lbp2b\") on node \"crc\" 
DevicePath \"\"" Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.428628 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.428643 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2cf4887-c8d9-4166-b63d-62524092bde4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.850758 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vgdpl"] Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.886739 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vgdpl"] Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.887757 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" event={"ID":"9fabd59a-046c-4afa-b884-f5a83cc91a53","Type":"ContainerStarted","Data":"8bbdd46e82c0779df5f7b1241f16735d46ff482acb3908d9c1f5bde24323ffa6"} Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.888757 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.924135 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" podStartSLOduration=2.924115452 podStartE2EDuration="2.924115452s" podCreationTimestamp="2026-03-12 15:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:06:26.917312995 +0000 UTC m=+1145.561327221" watchObservedRunningTime="2026-03-12 15:06:26.924115452 +0000 UTC m=+1145.568129678" Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.934195 
4832 generic.go:334] "Generic (PLEG): container finished" podID="b2cf4887-c8d9-4166-b63d-62524092bde4" containerID="39a7d82a3f56cb198c6a6309ca789b0f15935be087411679d503efb1f19a5dbc" exitCode=0 Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.934478 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" event={"ID":"b2cf4887-c8d9-4166-b63d-62524092bde4","Type":"ContainerDied","Data":"39a7d82a3f56cb198c6a6309ca789b0f15935be087411679d503efb1f19a5dbc"} Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.934595 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" event={"ID":"b2cf4887-c8d9-4166-b63d-62524092bde4","Type":"ContainerDied","Data":"6c5c5f5ad0f954993290a443e584409a35c2b6ef6203e91dee220fba97396e02"} Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.934880 4832 scope.go:117] "RemoveContainer" containerID="39a7d82a3f56cb198c6a6309ca789b0f15935be087411679d503efb1f19a5dbc" Mar 12 15:06:26 crc kubenswrapper[4832]: I0312 15:06:26.935089 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-nt2jb" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.034586 4832 scope.go:117] "RemoveContainer" containerID="0598256ec77ae2160e2de69c6cc96646245a5289b0dfad78f4e8bd42b38632a1" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.034665 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-nt2jb"] Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.038795 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-nt2jb"] Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.096726 4832 scope.go:117] "RemoveContainer" containerID="39a7d82a3f56cb198c6a6309ca789b0f15935be087411679d503efb1f19a5dbc" Mar 12 15:06:27 crc kubenswrapper[4832]: E0312 15:06:27.097955 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39a7d82a3f56cb198c6a6309ca789b0f15935be087411679d503efb1f19a5dbc\": container with ID starting with 39a7d82a3f56cb198c6a6309ca789b0f15935be087411679d503efb1f19a5dbc not found: ID does not exist" containerID="39a7d82a3f56cb198c6a6309ca789b0f15935be087411679d503efb1f19a5dbc" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.097987 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39a7d82a3f56cb198c6a6309ca789b0f15935be087411679d503efb1f19a5dbc"} err="failed to get container status \"39a7d82a3f56cb198c6a6309ca789b0f15935be087411679d503efb1f19a5dbc\": rpc error: code = NotFound desc = could not find container \"39a7d82a3f56cb198c6a6309ca789b0f15935be087411679d503efb1f19a5dbc\": container with ID starting with 39a7d82a3f56cb198c6a6309ca789b0f15935be087411679d503efb1f19a5dbc not found: ID does not exist" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.098007 4832 scope.go:117] "RemoveContainer" containerID="0598256ec77ae2160e2de69c6cc96646245a5289b0dfad78f4e8bd42b38632a1" Mar 12 
15:06:27 crc kubenswrapper[4832]: E0312 15:06:27.100117 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0598256ec77ae2160e2de69c6cc96646245a5289b0dfad78f4e8bd42b38632a1\": container with ID starting with 0598256ec77ae2160e2de69c6cc96646245a5289b0dfad78f4e8bd42b38632a1 not found: ID does not exist" containerID="0598256ec77ae2160e2de69c6cc96646245a5289b0dfad78f4e8bd42b38632a1" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.100153 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0598256ec77ae2160e2de69c6cc96646245a5289b0dfad78f4e8bd42b38632a1"} err="failed to get container status \"0598256ec77ae2160e2de69c6cc96646245a5289b0dfad78f4e8bd42b38632a1\": rpc error: code = NotFound desc = could not find container \"0598256ec77ae2160e2de69c6cc96646245a5289b0dfad78f4e8bd42b38632a1\": container with ID starting with 0598256ec77ae2160e2de69c6cc96646245a5289b0dfad78f4e8bd42b38632a1 not found: ID does not exist" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.147099 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-slglh"] Mar 12 15:06:27 crc kubenswrapper[4832]: E0312 15:06:27.148115 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2cf4887-c8d9-4166-b63d-62524092bde4" containerName="init" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.148145 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2cf4887-c8d9-4166-b63d-62524092bde4" containerName="init" Mar 12 15:06:27 crc kubenswrapper[4832]: E0312 15:06:27.148181 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2cf4887-c8d9-4166-b63d-62524092bde4" containerName="dnsmasq-dns" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.148190 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2cf4887-c8d9-4166-b63d-62524092bde4" containerName="dnsmasq-dns" Mar 12 15:06:27 crc 
kubenswrapper[4832]: E0312 15:06:27.148214 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2828d553-f62d-41a5-bcf1-0c4857572aae" containerName="mariadb-account-create-update" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.148222 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2828d553-f62d-41a5-bcf1-0c4857572aae" containerName="mariadb-account-create-update" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.148413 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2cf4887-c8d9-4166-b63d-62524092bde4" containerName="dnsmasq-dns" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.148438 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2828d553-f62d-41a5-bcf1-0c4857572aae" containerName="mariadb-account-create-update" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.148997 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-slglh" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.154568 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-slglh"] Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.259108 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b608-account-create-update-6gg2k"] Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.260124 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b608-account-create-update-6gg2k" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.262527 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2a71c99-f30a-4882-94b5-fb73111f41c2-operator-scripts\") pod \"cinder-db-create-slglh\" (UID: \"a2a71c99-f30a-4882-94b5-fb73111f41c2\") " pod="openstack/cinder-db-create-slglh" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.262604 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjm6n\" (UniqueName: \"kubernetes.io/projected/a2a71c99-f30a-4882-94b5-fb73111f41c2-kube-api-access-vjm6n\") pod \"cinder-db-create-slglh\" (UID: \"a2a71c99-f30a-4882-94b5-fb73111f41c2\") " pod="openstack/cinder-db-create-slglh" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.263143 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.265415 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-fdtv9"] Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.266930 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fdtv9" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.277078 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b608-account-create-update-6gg2k"] Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.286519 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fdtv9"] Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.345274 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-j7gh8"] Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.346217 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-j7gh8" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.357232 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-j7gh8"] Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.364638 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddfrl\" (UniqueName: \"kubernetes.io/projected/aa3ac3c3-bdf7-47c3-9e81-679ea2c44155-kube-api-access-ddfrl\") pod \"neutron-db-create-fdtv9\" (UID: \"aa3ac3c3-bdf7-47c3-9e81-679ea2c44155\") " pod="openstack/neutron-db-create-fdtv9" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.364691 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53942e7f-8c9b-4762-8f00-1f382fa40da8-operator-scripts\") pod \"cinder-b608-account-create-update-6gg2k\" (UID: \"53942e7f-8c9b-4762-8f00-1f382fa40da8\") " pod="openstack/cinder-b608-account-create-update-6gg2k" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.364729 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjm6n\" (UniqueName: \"kubernetes.io/projected/a2a71c99-f30a-4882-94b5-fb73111f41c2-kube-api-access-vjm6n\") pod \"cinder-db-create-slglh\" (UID: \"a2a71c99-f30a-4882-94b5-fb73111f41c2\") " pod="openstack/cinder-db-create-slglh" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.364825 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2a71c99-f30a-4882-94b5-fb73111f41c2-operator-scripts\") pod \"cinder-db-create-slglh\" (UID: \"a2a71c99-f30a-4882-94b5-fb73111f41c2\") " pod="openstack/cinder-db-create-slglh" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.364850 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa3ac3c3-bdf7-47c3-9e81-679ea2c44155-operator-scripts\") pod \"neutron-db-create-fdtv9\" (UID: \"aa3ac3c3-bdf7-47c3-9e81-679ea2c44155\") " pod="openstack/neutron-db-create-fdtv9" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.364870 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwrnk\" (UniqueName: \"kubernetes.io/projected/53942e7f-8c9b-4762-8f00-1f382fa40da8-kube-api-access-cwrnk\") pod \"cinder-b608-account-create-update-6gg2k\" (UID: \"53942e7f-8c9b-4762-8f00-1f382fa40da8\") " pod="openstack/cinder-b608-account-create-update-6gg2k" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.365686 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2a71c99-f30a-4882-94b5-fb73111f41c2-operator-scripts\") pod \"cinder-db-create-slglh\" (UID: \"a2a71c99-f30a-4882-94b5-fb73111f41c2\") " pod="openstack/cinder-db-create-slglh" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.385366 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjm6n\" (UniqueName: \"kubernetes.io/projected/a2a71c99-f30a-4882-94b5-fb73111f41c2-kube-api-access-vjm6n\") pod \"cinder-db-create-slglh\" (UID: \"a2a71c99-f30a-4882-94b5-fb73111f41c2\") " pod="openstack/cinder-db-create-slglh" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.442773 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8694-account-create-update-dcf4c"] Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.443694 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8694-account-create-update-dcf4c" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.445708 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.457494 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8694-account-create-update-dcf4c"] Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.466471 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10c43e56-caa8-4f19-8ca7-52f6b551b0e6-operator-scripts\") pod \"barbican-db-create-j7gh8\" (UID: \"10c43e56-caa8-4f19-8ca7-52f6b551b0e6\") " pod="openstack/barbican-db-create-j7gh8" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.466581 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa3ac3c3-bdf7-47c3-9e81-679ea2c44155-operator-scripts\") pod \"neutron-db-create-fdtv9\" (UID: \"aa3ac3c3-bdf7-47c3-9e81-679ea2c44155\") " pod="openstack/neutron-db-create-fdtv9" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.466616 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwrnk\" (UniqueName: \"kubernetes.io/projected/53942e7f-8c9b-4762-8f00-1f382fa40da8-kube-api-access-cwrnk\") pod \"cinder-b608-account-create-update-6gg2k\" (UID: \"53942e7f-8c9b-4762-8f00-1f382fa40da8\") " pod="openstack/cinder-b608-account-create-update-6gg2k" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.466667 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddfrl\" (UniqueName: \"kubernetes.io/projected/aa3ac3c3-bdf7-47c3-9e81-679ea2c44155-kube-api-access-ddfrl\") pod \"neutron-db-create-fdtv9\" (UID: \"aa3ac3c3-bdf7-47c3-9e81-679ea2c44155\") " 
pod="openstack/neutron-db-create-fdtv9" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.466706 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53942e7f-8c9b-4762-8f00-1f382fa40da8-operator-scripts\") pod \"cinder-b608-account-create-update-6gg2k\" (UID: \"53942e7f-8c9b-4762-8f00-1f382fa40da8\") " pod="openstack/cinder-b608-account-create-update-6gg2k" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.466728 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxcnk\" (UniqueName: \"kubernetes.io/projected/10c43e56-caa8-4f19-8ca7-52f6b551b0e6-kube-api-access-gxcnk\") pod \"barbican-db-create-j7gh8\" (UID: \"10c43e56-caa8-4f19-8ca7-52f6b551b0e6\") " pod="openstack/barbican-db-create-j7gh8" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.467333 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa3ac3c3-bdf7-47c3-9e81-679ea2c44155-operator-scripts\") pod \"neutron-db-create-fdtv9\" (UID: \"aa3ac3c3-bdf7-47c3-9e81-679ea2c44155\") " pod="openstack/neutron-db-create-fdtv9" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.467648 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53942e7f-8c9b-4762-8f00-1f382fa40da8-operator-scripts\") pod \"cinder-b608-account-create-update-6gg2k\" (UID: \"53942e7f-8c9b-4762-8f00-1f382fa40da8\") " pod="openstack/cinder-b608-account-create-update-6gg2k" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.470915 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-slglh" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.485398 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddfrl\" (UniqueName: \"kubernetes.io/projected/aa3ac3c3-bdf7-47c3-9e81-679ea2c44155-kube-api-access-ddfrl\") pod \"neutron-db-create-fdtv9\" (UID: \"aa3ac3c3-bdf7-47c3-9e81-679ea2c44155\") " pod="openstack/neutron-db-create-fdtv9" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.487787 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwrnk\" (UniqueName: \"kubernetes.io/projected/53942e7f-8c9b-4762-8f00-1f382fa40da8-kube-api-access-cwrnk\") pod \"cinder-b608-account-create-update-6gg2k\" (UID: \"53942e7f-8c9b-4762-8f00-1f382fa40da8\") " pod="openstack/cinder-b608-account-create-update-6gg2k" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.522114 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-2pdk5"] Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.525682 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-2pdk5" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.527853 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7gszf" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.530587 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.530982 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.531152 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.544705 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2pdk5"] Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.568286 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n6sk\" (UniqueName: \"kubernetes.io/projected/0504dba2-4849-450f-b10c-9669184c2820-kube-api-access-9n6sk\") pod \"neutron-8694-account-create-update-dcf4c\" (UID: \"0504dba2-4849-450f-b10c-9669184c2820\") " pod="openstack/neutron-8694-account-create-update-dcf4c" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.568338 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0504dba2-4849-450f-b10c-9669184c2820-operator-scripts\") pod \"neutron-8694-account-create-update-dcf4c\" (UID: \"0504dba2-4849-450f-b10c-9669184c2820\") " pod="openstack/neutron-8694-account-create-update-dcf4c" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.568374 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxcnk\" (UniqueName: 
\"kubernetes.io/projected/10c43e56-caa8-4f19-8ca7-52f6b551b0e6-kube-api-access-gxcnk\") pod \"barbican-db-create-j7gh8\" (UID: \"10c43e56-caa8-4f19-8ca7-52f6b551b0e6\") " pod="openstack/barbican-db-create-j7gh8" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.568449 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10c43e56-caa8-4f19-8ca7-52f6b551b0e6-operator-scripts\") pod \"barbican-db-create-j7gh8\" (UID: \"10c43e56-caa8-4f19-8ca7-52f6b551b0e6\") " pod="openstack/barbican-db-create-j7gh8" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.569134 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10c43e56-caa8-4f19-8ca7-52f6b551b0e6-operator-scripts\") pod \"barbican-db-create-j7gh8\" (UID: \"10c43e56-caa8-4f19-8ca7-52f6b551b0e6\") " pod="openstack/barbican-db-create-j7gh8" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.576228 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b608-account-create-update-6gg2k" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.583690 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fdtv9" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.591372 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxcnk\" (UniqueName: \"kubernetes.io/projected/10c43e56-caa8-4f19-8ca7-52f6b551b0e6-kube-api-access-gxcnk\") pod \"barbican-db-create-j7gh8\" (UID: \"10c43e56-caa8-4f19-8ca7-52f6b551b0e6\") " pod="openstack/barbican-db-create-j7gh8" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.658864 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-j7gh8" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.664453 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-00a7-account-create-update-kzbhm"] Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.665348 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-00a7-account-create-update-kzbhm" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.667125 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.669278 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5-config-data\") pod \"keystone-db-sync-2pdk5\" (UID: \"58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5\") " pod="openstack/keystone-db-sync-2pdk5" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.669303 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl26n\" (UniqueName: \"kubernetes.io/projected/58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5-kube-api-access-kl26n\") pod \"keystone-db-sync-2pdk5\" (UID: \"58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5\") " pod="openstack/keystone-db-sync-2pdk5" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.669395 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5-combined-ca-bundle\") pod \"keystone-db-sync-2pdk5\" (UID: \"58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5\") " pod="openstack/keystone-db-sync-2pdk5" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.669441 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n6sk\" (UniqueName: 
\"kubernetes.io/projected/0504dba2-4849-450f-b10c-9669184c2820-kube-api-access-9n6sk\") pod \"neutron-8694-account-create-update-dcf4c\" (UID: \"0504dba2-4849-450f-b10c-9669184c2820\") " pod="openstack/neutron-8694-account-create-update-dcf4c" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.669461 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0504dba2-4849-450f-b10c-9669184c2820-operator-scripts\") pod \"neutron-8694-account-create-update-dcf4c\" (UID: \"0504dba2-4849-450f-b10c-9669184c2820\") " pod="openstack/neutron-8694-account-create-update-dcf4c" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.670193 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0504dba2-4849-450f-b10c-9669184c2820-operator-scripts\") pod \"neutron-8694-account-create-update-dcf4c\" (UID: \"0504dba2-4849-450f-b10c-9669184c2820\") " pod="openstack/neutron-8694-account-create-update-dcf4c" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.683167 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-00a7-account-create-update-kzbhm"] Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.688239 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n6sk\" (UniqueName: \"kubernetes.io/projected/0504dba2-4849-450f-b10c-9669184c2820-kube-api-access-9n6sk\") pod \"neutron-8694-account-create-update-dcf4c\" (UID: \"0504dba2-4849-450f-b10c-9669184c2820\") " pod="openstack/neutron-8694-account-create-update-dcf4c" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.758301 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8694-account-create-update-dcf4c" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.770760 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be99adb-61ba-476f-94ad-7e6015445091-operator-scripts\") pod \"barbican-00a7-account-create-update-kzbhm\" (UID: \"7be99adb-61ba-476f-94ad-7e6015445091\") " pod="openstack/barbican-00a7-account-create-update-kzbhm" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.770834 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv9nx\" (UniqueName: \"kubernetes.io/projected/7be99adb-61ba-476f-94ad-7e6015445091-kube-api-access-lv9nx\") pod \"barbican-00a7-account-create-update-kzbhm\" (UID: \"7be99adb-61ba-476f-94ad-7e6015445091\") " pod="openstack/barbican-00a7-account-create-update-kzbhm" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.770875 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5-combined-ca-bundle\") pod \"keystone-db-sync-2pdk5\" (UID: \"58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5\") " pod="openstack/keystone-db-sync-2pdk5" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.770935 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5-config-data\") pod \"keystone-db-sync-2pdk5\" (UID: \"58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5\") " pod="openstack/keystone-db-sync-2pdk5" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.770952 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl26n\" (UniqueName: \"kubernetes.io/projected/58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5-kube-api-access-kl26n\") pod 
\"keystone-db-sync-2pdk5\" (UID: \"58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5\") " pod="openstack/keystone-db-sync-2pdk5" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.777840 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5-combined-ca-bundle\") pod \"keystone-db-sync-2pdk5\" (UID: \"58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5\") " pod="openstack/keystone-db-sync-2pdk5" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.781336 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5-config-data\") pod \"keystone-db-sync-2pdk5\" (UID: \"58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5\") " pod="openstack/keystone-db-sync-2pdk5" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.792120 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl26n\" (UniqueName: \"kubernetes.io/projected/58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5-kube-api-access-kl26n\") pod \"keystone-db-sync-2pdk5\" (UID: \"58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5\") " pod="openstack/keystone-db-sync-2pdk5" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.870711 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-2pdk5" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.872702 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv9nx\" (UniqueName: \"kubernetes.io/projected/7be99adb-61ba-476f-94ad-7e6015445091-kube-api-access-lv9nx\") pod \"barbican-00a7-account-create-update-kzbhm\" (UID: \"7be99adb-61ba-476f-94ad-7e6015445091\") " pod="openstack/barbican-00a7-account-create-update-kzbhm" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.872958 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be99adb-61ba-476f-94ad-7e6015445091-operator-scripts\") pod \"barbican-00a7-account-create-update-kzbhm\" (UID: \"7be99adb-61ba-476f-94ad-7e6015445091\") " pod="openstack/barbican-00a7-account-create-update-kzbhm" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.874051 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be99adb-61ba-476f-94ad-7e6015445091-operator-scripts\") pod \"barbican-00a7-account-create-update-kzbhm\" (UID: \"7be99adb-61ba-476f-94ad-7e6015445091\") " pod="openstack/barbican-00a7-account-create-update-kzbhm" Mar 12 15:06:27 crc kubenswrapper[4832]: I0312 15:06:27.895448 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv9nx\" (UniqueName: \"kubernetes.io/projected/7be99adb-61ba-476f-94ad-7e6015445091-kube-api-access-lv9nx\") pod \"barbican-00a7-account-create-update-kzbhm\" (UID: \"7be99adb-61ba-476f-94ad-7e6015445091\") " pod="openstack/barbican-00a7-account-create-update-kzbhm" Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.001781 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-00a7-account-create-update-kzbhm" Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.089927 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-slglh"] Mar 12 15:06:28 crc kubenswrapper[4832]: W0312 15:06:28.109757 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2a71c99_f30a_4882_94b5_fb73111f41c2.slice/crio-8a0dec8ab35e2ba92d6a960ba31ebc29c740657dceaa12c0a11c482162ab5072 WatchSource:0}: Error finding container 8a0dec8ab35e2ba92d6a960ba31ebc29c740657dceaa12c0a11c482162ab5072: Status 404 returned error can't find the container with id 8a0dec8ab35e2ba92d6a960ba31ebc29c740657dceaa12c0a11c482162ab5072 Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.191951 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-j7gh8"] Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.218216 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fdtv9"] Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.241710 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b608-account-create-update-6gg2k"] Mar 12 15:06:28 crc kubenswrapper[4832]: W0312 15:06:28.279075 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53942e7f_8c9b_4762_8f00_1f382fa40da8.slice/crio-b3d8baa21313a553f458ee8ceeda245ed85138a8e812cc9c6606d195887c2ed0 WatchSource:0}: Error finding container b3d8baa21313a553f458ee8ceeda245ed85138a8e812cc9c6606d195887c2ed0: Status 404 returned error can't find the container with id b3d8baa21313a553f458ee8ceeda245ed85138a8e812cc9c6606d195887c2ed0 Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.503459 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8694-account-create-update-dcf4c"] Mar 12 
15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.581465 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2pdk5"] Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.602518 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-00a7-account-create-update-kzbhm"] Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.641536 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2828d553-f62d-41a5-bcf1-0c4857572aae" path="/var/lib/kubelet/pods/2828d553-f62d-41a5-bcf1-0c4857572aae/volumes" Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.642725 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2cf4887-c8d9-4166-b63d-62524092bde4" path="/var/lib/kubelet/pods/b2cf4887-c8d9-4166-b63d-62524092bde4/volumes" Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.951309 4832 generic.go:334] "Generic (PLEG): container finished" podID="53942e7f-8c9b-4762-8f00-1f382fa40da8" containerID="45c26146034a839c32ae2ef84930b9b9cf07c7cc02a278cf63059e116f822c28" exitCode=0 Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.951380 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b608-account-create-update-6gg2k" event={"ID":"53942e7f-8c9b-4762-8f00-1f382fa40da8","Type":"ContainerDied","Data":"45c26146034a839c32ae2ef84930b9b9cf07c7cc02a278cf63059e116f822c28"} Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.951411 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b608-account-create-update-6gg2k" event={"ID":"53942e7f-8c9b-4762-8f00-1f382fa40da8","Type":"ContainerStarted","Data":"b3d8baa21313a553f458ee8ceeda245ed85138a8e812cc9c6606d195887c2ed0"} Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.952224 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-00a7-account-create-update-kzbhm" 
event={"ID":"7be99adb-61ba-476f-94ad-7e6015445091","Type":"ContainerStarted","Data":"e920e344ed8f3650bea7cefc072afb2b3ff0d6dabe156a6e70d4bd5e737f81a5"} Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.953133 4832 generic.go:334] "Generic (PLEG): container finished" podID="10c43e56-caa8-4f19-8ca7-52f6b551b0e6" containerID="5943410cce50b601e7ef4b212dbe01a2117d47c37961351321f07668805e5e5e" exitCode=0 Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.953187 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-j7gh8" event={"ID":"10c43e56-caa8-4f19-8ca7-52f6b551b0e6","Type":"ContainerDied","Data":"5943410cce50b601e7ef4b212dbe01a2117d47c37961351321f07668805e5e5e"} Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.953227 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-j7gh8" event={"ID":"10c43e56-caa8-4f19-8ca7-52f6b551b0e6","Type":"ContainerStarted","Data":"fb0c200fd2d899268e60b3d2f6e6bbaba1ab4613c706a8bfa5a30849f5f1ded0"} Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.955081 4832 generic.go:334] "Generic (PLEG): container finished" podID="aa3ac3c3-bdf7-47c3-9e81-679ea2c44155" containerID="78d04d8c56396efeb67f93c34934cb499fe5068f424cf7cf5e5a938808f6df74" exitCode=0 Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.955161 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fdtv9" event={"ID":"aa3ac3c3-bdf7-47c3-9e81-679ea2c44155","Type":"ContainerDied","Data":"78d04d8c56396efeb67f93c34934cb499fe5068f424cf7cf5e5a938808f6df74"} Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.955186 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fdtv9" event={"ID":"aa3ac3c3-bdf7-47c3-9e81-679ea2c44155","Type":"ContainerStarted","Data":"959c421d04e07363626979232cd69cd208aa8664a322754bfaf8e2ac78935b57"} Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.956748 4832 generic.go:334] "Generic 
(PLEG): container finished" podID="a2a71c99-f30a-4882-94b5-fb73111f41c2" containerID="7b9c130bc432728aedb0aa629352275e394c9a9b23a74a177a256d240d3d6b33" exitCode=0 Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.956786 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-slglh" event={"ID":"a2a71c99-f30a-4882-94b5-fb73111f41c2","Type":"ContainerDied","Data":"7b9c130bc432728aedb0aa629352275e394c9a9b23a74a177a256d240d3d6b33"} Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.956800 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-slglh" event={"ID":"a2a71c99-f30a-4882-94b5-fb73111f41c2","Type":"ContainerStarted","Data":"8a0dec8ab35e2ba92d6a960ba31ebc29c740657dceaa12c0a11c482162ab5072"} Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.958165 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8694-account-create-update-dcf4c" event={"ID":"0504dba2-4849-450f-b10c-9669184c2820","Type":"ContainerStarted","Data":"7ab9e8f6ced3b89e3e743afc956f1d52bdf251fed28f38a83b18c1c477fd863c"} Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.958188 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8694-account-create-update-dcf4c" event={"ID":"0504dba2-4849-450f-b10c-9669184c2820","Type":"ContainerStarted","Data":"33101ddaa3a0106d4cfa4ef1941845d18c686e415034c3f1fa948a1e003e2835"} Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.959884 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2pdk5" event={"ID":"58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5","Type":"ContainerStarted","Data":"2ddcfbf5af46c322403eed79b0f34e08478731af8c61d2182ae4fc9cf97adce0"} Mar 12 15:06:28 crc kubenswrapper[4832]: I0312 15:06:28.989968 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8694-account-create-update-dcf4c" podStartSLOduration=1.9899522589999998 
podStartE2EDuration="1.989952259s" podCreationTimestamp="2026-03-12 15:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:06:28.984229323 +0000 UTC m=+1147.628243569" watchObservedRunningTime="2026-03-12 15:06:28.989952259 +0000 UTC m=+1147.633966485" Mar 12 15:06:29 crc kubenswrapper[4832]: I0312 15:06:29.968221 4832 generic.go:334] "Generic (PLEG): container finished" podID="7be99adb-61ba-476f-94ad-7e6015445091" containerID="e49f6b2a0e4a87b23acd73b768c7d9122768bdef97e443e1beac18bb6e396bfb" exitCode=0 Mar 12 15:06:29 crc kubenswrapper[4832]: I0312 15:06:29.968264 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-00a7-account-create-update-kzbhm" event={"ID":"7be99adb-61ba-476f-94ad-7e6015445091","Type":"ContainerDied","Data":"e49f6b2a0e4a87b23acd73b768c7d9122768bdef97e443e1beac18bb6e396bfb"} Mar 12 15:06:29 crc kubenswrapper[4832]: I0312 15:06:29.970096 4832 generic.go:334] "Generic (PLEG): container finished" podID="0504dba2-4849-450f-b10c-9669184c2820" containerID="7ab9e8f6ced3b89e3e743afc956f1d52bdf251fed28f38a83b18c1c477fd863c" exitCode=0 Mar 12 15:06:29 crc kubenswrapper[4832]: I0312 15:06:29.970141 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8694-account-create-update-dcf4c" event={"ID":"0504dba2-4849-450f-b10c-9669184c2820","Type":"ContainerDied","Data":"7ab9e8f6ced3b89e3e743afc956f1d52bdf251fed28f38a83b18c1c477fd863c"} Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.436887 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-slglh" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.444047 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fdtv9" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.532877 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddfrl\" (UniqueName: \"kubernetes.io/projected/aa3ac3c3-bdf7-47c3-9e81-679ea2c44155-kube-api-access-ddfrl\") pod \"aa3ac3c3-bdf7-47c3-9e81-679ea2c44155\" (UID: \"aa3ac3c3-bdf7-47c3-9e81-679ea2c44155\") " Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.532948 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2a71c99-f30a-4882-94b5-fb73111f41c2-operator-scripts\") pod \"a2a71c99-f30a-4882-94b5-fb73111f41c2\" (UID: \"a2a71c99-f30a-4882-94b5-fb73111f41c2\") " Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.532983 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjm6n\" (UniqueName: \"kubernetes.io/projected/a2a71c99-f30a-4882-94b5-fb73111f41c2-kube-api-access-vjm6n\") pod \"a2a71c99-f30a-4882-94b5-fb73111f41c2\" (UID: \"a2a71c99-f30a-4882-94b5-fb73111f41c2\") " Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.533053 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa3ac3c3-bdf7-47c3-9e81-679ea2c44155-operator-scripts\") pod \"aa3ac3c3-bdf7-47c3-9e81-679ea2c44155\" (UID: \"aa3ac3c3-bdf7-47c3-9e81-679ea2c44155\") " Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.533603 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a71c99-f30a-4882-94b5-fb73111f41c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a2a71c99-f30a-4882-94b5-fb73111f41c2" (UID: "a2a71c99-f30a-4882-94b5-fb73111f41c2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.533614 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa3ac3c3-bdf7-47c3-9e81-679ea2c44155-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa3ac3c3-bdf7-47c3-9e81-679ea2c44155" (UID: "aa3ac3c3-bdf7-47c3-9e81-679ea2c44155"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.538200 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa3ac3c3-bdf7-47c3-9e81-679ea2c44155-kube-api-access-ddfrl" (OuterVolumeSpecName: "kube-api-access-ddfrl") pod "aa3ac3c3-bdf7-47c3-9e81-679ea2c44155" (UID: "aa3ac3c3-bdf7-47c3-9e81-679ea2c44155"). InnerVolumeSpecName "kube-api-access-ddfrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.538600 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2a71c99-f30a-4882-94b5-fb73111f41c2-kube-api-access-vjm6n" (OuterVolumeSpecName: "kube-api-access-vjm6n") pod "a2a71c99-f30a-4882-94b5-fb73111f41c2" (UID: "a2a71c99-f30a-4882-94b5-fb73111f41c2"). InnerVolumeSpecName "kube-api-access-vjm6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.613343 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-j7gh8" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.618225 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b608-account-create-update-6gg2k" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.635708 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2a71c99-f30a-4882-94b5-fb73111f41c2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.635738 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjm6n\" (UniqueName: \"kubernetes.io/projected/a2a71c99-f30a-4882-94b5-fb73111f41c2-kube-api-access-vjm6n\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.635750 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa3ac3c3-bdf7-47c3-9e81-679ea2c44155-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.635760 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddfrl\" (UniqueName: \"kubernetes.io/projected/aa3ac3c3-bdf7-47c3-9e81-679ea2c44155-kube-api-access-ddfrl\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.737148 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwrnk\" (UniqueName: \"kubernetes.io/projected/53942e7f-8c9b-4762-8f00-1f382fa40da8-kube-api-access-cwrnk\") pod \"53942e7f-8c9b-4762-8f00-1f382fa40da8\" (UID: \"53942e7f-8c9b-4762-8f00-1f382fa40da8\") " Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.737190 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53942e7f-8c9b-4762-8f00-1f382fa40da8-operator-scripts\") pod \"53942e7f-8c9b-4762-8f00-1f382fa40da8\" (UID: \"53942e7f-8c9b-4762-8f00-1f382fa40da8\") " Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.737231 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxcnk\" (UniqueName: \"kubernetes.io/projected/10c43e56-caa8-4f19-8ca7-52f6b551b0e6-kube-api-access-gxcnk\") pod \"10c43e56-caa8-4f19-8ca7-52f6b551b0e6\" (UID: \"10c43e56-caa8-4f19-8ca7-52f6b551b0e6\") " Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.737341 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10c43e56-caa8-4f19-8ca7-52f6b551b0e6-operator-scripts\") pod \"10c43e56-caa8-4f19-8ca7-52f6b551b0e6\" (UID: \"10c43e56-caa8-4f19-8ca7-52f6b551b0e6\") " Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.738008 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10c43e56-caa8-4f19-8ca7-52f6b551b0e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10c43e56-caa8-4f19-8ca7-52f6b551b0e6" (UID: "10c43e56-caa8-4f19-8ca7-52f6b551b0e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.738191 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53942e7f-8c9b-4762-8f00-1f382fa40da8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53942e7f-8c9b-4762-8f00-1f382fa40da8" (UID: "53942e7f-8c9b-4762-8f00-1f382fa40da8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.740062 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53942e7f-8c9b-4762-8f00-1f382fa40da8-kube-api-access-cwrnk" (OuterVolumeSpecName: "kube-api-access-cwrnk") pod "53942e7f-8c9b-4762-8f00-1f382fa40da8" (UID: "53942e7f-8c9b-4762-8f00-1f382fa40da8"). InnerVolumeSpecName "kube-api-access-cwrnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.740358 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c43e56-caa8-4f19-8ca7-52f6b551b0e6-kube-api-access-gxcnk" (OuterVolumeSpecName: "kube-api-access-gxcnk") pod "10c43e56-caa8-4f19-8ca7-52f6b551b0e6" (UID: "10c43e56-caa8-4f19-8ca7-52f6b551b0e6"). InnerVolumeSpecName "kube-api-access-gxcnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.839347 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwrnk\" (UniqueName: \"kubernetes.io/projected/53942e7f-8c9b-4762-8f00-1f382fa40da8-kube-api-access-cwrnk\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.839381 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53942e7f-8c9b-4762-8f00-1f382fa40da8-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.839391 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxcnk\" (UniqueName: \"kubernetes.io/projected/10c43e56-caa8-4f19-8ca7-52f6b551b0e6-kube-api-access-gxcnk\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.839402 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10c43e56-caa8-4f19-8ca7-52f6b551b0e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.984027 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-slglh" event={"ID":"a2a71c99-f30a-4882-94b5-fb73111f41c2","Type":"ContainerDied","Data":"8a0dec8ab35e2ba92d6a960ba31ebc29c740657dceaa12c0a11c482162ab5072"} Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.984068 
4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a0dec8ab35e2ba92d6a960ba31ebc29c740657dceaa12c0a11c482162ab5072" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.984043 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-slglh" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.985841 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b608-account-create-update-6gg2k" event={"ID":"53942e7f-8c9b-4762-8f00-1f382fa40da8","Type":"ContainerDied","Data":"b3d8baa21313a553f458ee8ceeda245ed85138a8e812cc9c6606d195887c2ed0"} Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.985913 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3d8baa21313a553f458ee8ceeda245ed85138a8e812cc9c6606d195887c2ed0" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.986021 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b608-account-create-update-6gg2k" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.988927 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-j7gh8" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.988957 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-j7gh8" event={"ID":"10c43e56-caa8-4f19-8ca7-52f6b551b0e6","Type":"ContainerDied","Data":"fb0c200fd2d899268e60b3d2f6e6bbaba1ab4613c706a8bfa5a30849f5f1ded0"} Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.989026 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb0c200fd2d899268e60b3d2f6e6bbaba1ab4613c706a8bfa5a30849f5f1ded0" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.990640 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fdtv9" event={"ID":"aa3ac3c3-bdf7-47c3-9e81-679ea2c44155","Type":"ContainerDied","Data":"959c421d04e07363626979232cd69cd208aa8664a322754bfaf8e2ac78935b57"} Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.990685 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="959c421d04e07363626979232cd69cd208aa8664a322754bfaf8e2ac78935b57" Mar 12 15:06:30 crc kubenswrapper[4832]: I0312 15:06:30.990733 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fdtv9" Mar 12 15:06:31 crc kubenswrapper[4832]: I0312 15:06:31.848384 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5zp7p"] Mar 12 15:06:31 crc kubenswrapper[4832]: E0312 15:06:31.848997 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3ac3c3-bdf7-47c3-9e81-679ea2c44155" containerName="mariadb-database-create" Mar 12 15:06:31 crc kubenswrapper[4832]: I0312 15:06:31.849014 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3ac3c3-bdf7-47c3-9e81-679ea2c44155" containerName="mariadb-database-create" Mar 12 15:06:31 crc kubenswrapper[4832]: E0312 15:06:31.849032 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a71c99-f30a-4882-94b5-fb73111f41c2" containerName="mariadb-database-create" Mar 12 15:06:31 crc kubenswrapper[4832]: I0312 15:06:31.849040 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a71c99-f30a-4882-94b5-fb73111f41c2" containerName="mariadb-database-create" Mar 12 15:06:31 crc kubenswrapper[4832]: E0312 15:06:31.849050 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c43e56-caa8-4f19-8ca7-52f6b551b0e6" containerName="mariadb-database-create" Mar 12 15:06:31 crc kubenswrapper[4832]: I0312 15:06:31.849057 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c43e56-caa8-4f19-8ca7-52f6b551b0e6" containerName="mariadb-database-create" Mar 12 15:06:31 crc kubenswrapper[4832]: E0312 15:06:31.849078 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53942e7f-8c9b-4762-8f00-1f382fa40da8" containerName="mariadb-account-create-update" Mar 12 15:06:31 crc kubenswrapper[4832]: I0312 15:06:31.849086 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="53942e7f-8c9b-4762-8f00-1f382fa40da8" containerName="mariadb-account-create-update" Mar 12 15:06:31 crc kubenswrapper[4832]: I0312 15:06:31.849290 4832 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="aa3ac3c3-bdf7-47c3-9e81-679ea2c44155" containerName="mariadb-database-create" Mar 12 15:06:31 crc kubenswrapper[4832]: I0312 15:06:31.849310 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c43e56-caa8-4f19-8ca7-52f6b551b0e6" containerName="mariadb-database-create" Mar 12 15:06:31 crc kubenswrapper[4832]: I0312 15:06:31.849324 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2a71c99-f30a-4882-94b5-fb73111f41c2" containerName="mariadb-database-create" Mar 12 15:06:31 crc kubenswrapper[4832]: I0312 15:06:31.849337 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="53942e7f-8c9b-4762-8f00-1f382fa40da8" containerName="mariadb-account-create-update" Mar 12 15:06:31 crc kubenswrapper[4832]: I0312 15:06:31.849921 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5zp7p" Mar 12 15:06:31 crc kubenswrapper[4832]: I0312 15:06:31.858396 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 12 15:06:31 crc kubenswrapper[4832]: I0312 15:06:31.866275 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5zp7p"] Mar 12 15:06:31 crc kubenswrapper[4832]: I0312 15:06:31.963312 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27df49fc-7f53-4145-97b1-9acdbb768496-operator-scripts\") pod \"root-account-create-update-5zp7p\" (UID: \"27df49fc-7f53-4145-97b1-9acdbb768496\") " pod="openstack/root-account-create-update-5zp7p" Mar 12 15:06:31 crc kubenswrapper[4832]: I0312 15:06:31.963446 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvtfw\" (UniqueName: 
\"kubernetes.io/projected/27df49fc-7f53-4145-97b1-9acdbb768496-kube-api-access-mvtfw\") pod \"root-account-create-update-5zp7p\" (UID: \"27df49fc-7f53-4145-97b1-9acdbb768496\") " pod="openstack/root-account-create-update-5zp7p" Mar 12 15:06:32 crc kubenswrapper[4832]: I0312 15:06:32.065053 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvtfw\" (UniqueName: \"kubernetes.io/projected/27df49fc-7f53-4145-97b1-9acdbb768496-kube-api-access-mvtfw\") pod \"root-account-create-update-5zp7p\" (UID: \"27df49fc-7f53-4145-97b1-9acdbb768496\") " pod="openstack/root-account-create-update-5zp7p" Mar 12 15:06:32 crc kubenswrapper[4832]: I0312 15:06:32.065225 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27df49fc-7f53-4145-97b1-9acdbb768496-operator-scripts\") pod \"root-account-create-update-5zp7p\" (UID: \"27df49fc-7f53-4145-97b1-9acdbb768496\") " pod="openstack/root-account-create-update-5zp7p" Mar 12 15:06:32 crc kubenswrapper[4832]: I0312 15:06:32.065989 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27df49fc-7f53-4145-97b1-9acdbb768496-operator-scripts\") pod \"root-account-create-update-5zp7p\" (UID: \"27df49fc-7f53-4145-97b1-9acdbb768496\") " pod="openstack/root-account-create-update-5zp7p" Mar 12 15:06:32 crc kubenswrapper[4832]: I0312 15:06:32.085186 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvtfw\" (UniqueName: \"kubernetes.io/projected/27df49fc-7f53-4145-97b1-9acdbb768496-kube-api-access-mvtfw\") pod \"root-account-create-update-5zp7p\" (UID: \"27df49fc-7f53-4145-97b1-9acdbb768496\") " pod="openstack/root-account-create-update-5zp7p" Mar 12 15:06:32 crc kubenswrapper[4832]: I0312 15:06:32.173274 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5zp7p" Mar 12 15:06:33 crc kubenswrapper[4832]: I0312 15:06:33.521594 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8694-account-create-update-dcf4c" Mar 12 15:06:33 crc kubenswrapper[4832]: I0312 15:06:33.524762 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-00a7-account-create-update-kzbhm" Mar 12 15:06:33 crc kubenswrapper[4832]: I0312 15:06:33.608413 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be99adb-61ba-476f-94ad-7e6015445091-operator-scripts\") pod \"7be99adb-61ba-476f-94ad-7e6015445091\" (UID: \"7be99adb-61ba-476f-94ad-7e6015445091\") " Mar 12 15:06:33 crc kubenswrapper[4832]: I0312 15:06:33.609057 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n6sk\" (UniqueName: \"kubernetes.io/projected/0504dba2-4849-450f-b10c-9669184c2820-kube-api-access-9n6sk\") pod \"0504dba2-4849-450f-b10c-9669184c2820\" (UID: \"0504dba2-4849-450f-b10c-9669184c2820\") " Mar 12 15:06:33 crc kubenswrapper[4832]: I0312 15:06:33.609540 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv9nx\" (UniqueName: \"kubernetes.io/projected/7be99adb-61ba-476f-94ad-7e6015445091-kube-api-access-lv9nx\") pod \"7be99adb-61ba-476f-94ad-7e6015445091\" (UID: \"7be99adb-61ba-476f-94ad-7e6015445091\") " Mar 12 15:06:33 crc kubenswrapper[4832]: I0312 15:06:33.609578 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7be99adb-61ba-476f-94ad-7e6015445091-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7be99adb-61ba-476f-94ad-7e6015445091" (UID: "7be99adb-61ba-476f-94ad-7e6015445091"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:33 crc kubenswrapper[4832]: I0312 15:06:33.617586 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0504dba2-4849-450f-b10c-9669184c2820-operator-scripts\") pod \"0504dba2-4849-450f-b10c-9669184c2820\" (UID: \"0504dba2-4849-450f-b10c-9669184c2820\") " Mar 12 15:06:33 crc kubenswrapper[4832]: I0312 15:06:33.618301 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be99adb-61ba-476f-94ad-7e6015445091-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:33 crc kubenswrapper[4832]: I0312 15:06:33.618729 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0504dba2-4849-450f-b10c-9669184c2820-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0504dba2-4849-450f-b10c-9669184c2820" (UID: "0504dba2-4849-450f-b10c-9669184c2820"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:33 crc kubenswrapper[4832]: I0312 15:06:33.621709 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0504dba2-4849-450f-b10c-9669184c2820-kube-api-access-9n6sk" (OuterVolumeSpecName: "kube-api-access-9n6sk") pod "0504dba2-4849-450f-b10c-9669184c2820" (UID: "0504dba2-4849-450f-b10c-9669184c2820"). InnerVolumeSpecName "kube-api-access-9n6sk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:33 crc kubenswrapper[4832]: I0312 15:06:33.621837 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be99adb-61ba-476f-94ad-7e6015445091-kube-api-access-lv9nx" (OuterVolumeSpecName: "kube-api-access-lv9nx") pod "7be99adb-61ba-476f-94ad-7e6015445091" (UID: "7be99adb-61ba-476f-94ad-7e6015445091"). 
InnerVolumeSpecName "kube-api-access-lv9nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:33 crc kubenswrapper[4832]: I0312 15:06:33.719286 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n6sk\" (UniqueName: \"kubernetes.io/projected/0504dba2-4849-450f-b10c-9669184c2820-kube-api-access-9n6sk\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:33 crc kubenswrapper[4832]: I0312 15:06:33.719307 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv9nx\" (UniqueName: \"kubernetes.io/projected/7be99adb-61ba-476f-94ad-7e6015445091-kube-api-access-lv9nx\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:33 crc kubenswrapper[4832]: I0312 15:06:33.719316 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0504dba2-4849-450f-b10c-9669184c2820-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:34 crc kubenswrapper[4832]: I0312 15:06:34.019530 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2pdk5" event={"ID":"58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5","Type":"ContainerStarted","Data":"f09551b0d4eebac467ba24f2aaaceface48d97d0b642caaab285900ffa7761d1"} Mar 12 15:06:34 crc kubenswrapper[4832]: I0312 15:06:34.020752 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-00a7-account-create-update-kzbhm" event={"ID":"7be99adb-61ba-476f-94ad-7e6015445091","Type":"ContainerDied","Data":"e920e344ed8f3650bea7cefc072afb2b3ff0d6dabe156a6e70d4bd5e737f81a5"} Mar 12 15:06:34 crc kubenswrapper[4832]: I0312 15:06:34.020774 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e920e344ed8f3650bea7cefc072afb2b3ff0d6dabe156a6e70d4bd5e737f81a5" Mar 12 15:06:34 crc kubenswrapper[4832]: I0312 15:06:34.020779 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-00a7-account-create-update-kzbhm" Mar 12 15:06:34 crc kubenswrapper[4832]: I0312 15:06:34.022782 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8694-account-create-update-dcf4c" event={"ID":"0504dba2-4849-450f-b10c-9669184c2820","Type":"ContainerDied","Data":"33101ddaa3a0106d4cfa4ef1941845d18c686e415034c3f1fa948a1e003e2835"} Mar 12 15:06:34 crc kubenswrapper[4832]: I0312 15:06:34.022805 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8694-account-create-update-dcf4c" Mar 12 15:06:34 crc kubenswrapper[4832]: I0312 15:06:34.022826 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33101ddaa3a0106d4cfa4ef1941845d18c686e415034c3f1fa948a1e003e2835" Mar 12 15:06:34 crc kubenswrapper[4832]: I0312 15:06:34.044817 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5zp7p"] Mar 12 15:06:34 crc kubenswrapper[4832]: I0312 15:06:34.050140 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-2pdk5" podStartSLOduration=2.090704372 podStartE2EDuration="7.050120697s" podCreationTimestamp="2026-03-12 15:06:27 +0000 UTC" firstStartedPulling="2026-03-12 15:06:28.558948346 +0000 UTC m=+1147.202962572" lastFinishedPulling="2026-03-12 15:06:33.518364671 +0000 UTC m=+1152.162378897" observedRunningTime="2026-03-12 15:06:34.036604855 +0000 UTC m=+1152.680619091" watchObservedRunningTime="2026-03-12 15:06:34.050120697 +0000 UTC m=+1152.694134923" Mar 12 15:06:34 crc kubenswrapper[4832]: I0312 15:06:34.472691 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:06:34 crc kubenswrapper[4832]: I0312 15:06:34.555433 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4hx64"] Mar 12 15:06:34 crc 
kubenswrapper[4832]: I0312 15:06:34.555814 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" podUID="e40347b7-ed6a-49e8-af1b-d361a332bf94" containerName="dnsmasq-dns" containerID="cri-o://aabb2defc601f3d3797ce0b64b067866a91e85c962c904780f3178a960455189" gracePeriod=10 Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.043198 4832 generic.go:334] "Generic (PLEG): container finished" podID="e40347b7-ed6a-49e8-af1b-d361a332bf94" containerID="aabb2defc601f3d3797ce0b64b067866a91e85c962c904780f3178a960455189" exitCode=0 Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.043441 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" event={"ID":"e40347b7-ed6a-49e8-af1b-d361a332bf94","Type":"ContainerDied","Data":"aabb2defc601f3d3797ce0b64b067866a91e85c962c904780f3178a960455189"} Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.043597 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" event={"ID":"e40347b7-ed6a-49e8-af1b-d361a332bf94","Type":"ContainerDied","Data":"ab46b9523981e6d26173e665c95a0b3971ab5aab6955683972fd8053dfde19e2"} Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.043623 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab46b9523981e6d26173e665c95a0b3971ab5aab6955683972fd8053dfde19e2" Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.048033 4832 generic.go:334] "Generic (PLEG): container finished" podID="27df49fc-7f53-4145-97b1-9acdbb768496" containerID="612633c1c418b6fe62958f6e429b4c03a92d3a33f959bc5499f431c6949a2f68" exitCode=0 Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.048863 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5zp7p" 
event={"ID":"27df49fc-7f53-4145-97b1-9acdbb768496","Type":"ContainerDied","Data":"612633c1c418b6fe62958f6e429b4c03a92d3a33f959bc5499f431c6949a2f68"} Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.048899 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5zp7p" event={"ID":"27df49fc-7f53-4145-97b1-9acdbb768496","Type":"ContainerStarted","Data":"83a5a0d9ad1e029d613b6970422ed6cf7e3f8bc1323773698f0a8a3d6886068f"} Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.066999 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.245573 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-ovsdbserver-nb\") pod \"e40347b7-ed6a-49e8-af1b-d361a332bf94\" (UID: \"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.245620 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh8mt\" (UniqueName: \"kubernetes.io/projected/e40347b7-ed6a-49e8-af1b-d361a332bf94-kube-api-access-vh8mt\") pod \"e40347b7-ed6a-49e8-af1b-d361a332bf94\" (UID: \"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.245668 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-dns-svc\") pod \"e40347b7-ed6a-49e8-af1b-d361a332bf94\" (UID: \"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.245734 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-ovsdbserver-sb\") pod 
\"e40347b7-ed6a-49e8-af1b-d361a332bf94\" (UID: \"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.245759 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-config\") pod \"e40347b7-ed6a-49e8-af1b-d361a332bf94\" (UID: \"e40347b7-ed6a-49e8-af1b-d361a332bf94\") " Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.250824 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e40347b7-ed6a-49e8-af1b-d361a332bf94-kube-api-access-vh8mt" (OuterVolumeSpecName: "kube-api-access-vh8mt") pod "e40347b7-ed6a-49e8-af1b-d361a332bf94" (UID: "e40347b7-ed6a-49e8-af1b-d361a332bf94"). InnerVolumeSpecName "kube-api-access-vh8mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.288307 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-config" (OuterVolumeSpecName: "config") pod "e40347b7-ed6a-49e8-af1b-d361a332bf94" (UID: "e40347b7-ed6a-49e8-af1b-d361a332bf94"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.294831 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e40347b7-ed6a-49e8-af1b-d361a332bf94" (UID: "e40347b7-ed6a-49e8-af1b-d361a332bf94"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.297697 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e40347b7-ed6a-49e8-af1b-d361a332bf94" (UID: "e40347b7-ed6a-49e8-af1b-d361a332bf94"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.301183 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e40347b7-ed6a-49e8-af1b-d361a332bf94" (UID: "e40347b7-ed6a-49e8-af1b-d361a332bf94"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.347845 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.347880 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh8mt\" (UniqueName: \"kubernetes.io/projected/e40347b7-ed6a-49e8-af1b-d361a332bf94-kube-api-access-vh8mt\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.347894 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.347902 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-ovsdbserver-sb\") on node \"crc\" 
DevicePath \"\"" Mar 12 15:06:35 crc kubenswrapper[4832]: I0312 15:06:35.347911 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e40347b7-ed6a-49e8-af1b-d361a332bf94-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:36 crc kubenswrapper[4832]: I0312 15:06:36.056234 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4hx64" Mar 12 15:06:36 crc kubenswrapper[4832]: I0312 15:06:36.088528 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4hx64"] Mar 12 15:06:36 crc kubenswrapper[4832]: I0312 15:06:36.096393 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4hx64"] Mar 12 15:06:36 crc kubenswrapper[4832]: I0312 15:06:36.384433 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5zp7p" Mar 12 15:06:36 crc kubenswrapper[4832]: I0312 15:06:36.564468 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvtfw\" (UniqueName: \"kubernetes.io/projected/27df49fc-7f53-4145-97b1-9acdbb768496-kube-api-access-mvtfw\") pod \"27df49fc-7f53-4145-97b1-9acdbb768496\" (UID: \"27df49fc-7f53-4145-97b1-9acdbb768496\") " Mar 12 15:06:36 crc kubenswrapper[4832]: I0312 15:06:36.564551 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27df49fc-7f53-4145-97b1-9acdbb768496-operator-scripts\") pod \"27df49fc-7f53-4145-97b1-9acdbb768496\" (UID: \"27df49fc-7f53-4145-97b1-9acdbb768496\") " Mar 12 15:06:36 crc kubenswrapper[4832]: I0312 15:06:36.565136 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27df49fc-7f53-4145-97b1-9acdbb768496-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"27df49fc-7f53-4145-97b1-9acdbb768496" (UID: "27df49fc-7f53-4145-97b1-9acdbb768496"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:36 crc kubenswrapper[4832]: I0312 15:06:36.569999 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27df49fc-7f53-4145-97b1-9acdbb768496-kube-api-access-mvtfw" (OuterVolumeSpecName: "kube-api-access-mvtfw") pod "27df49fc-7f53-4145-97b1-9acdbb768496" (UID: "27df49fc-7f53-4145-97b1-9acdbb768496"). InnerVolumeSpecName "kube-api-access-mvtfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:36 crc kubenswrapper[4832]: I0312 15:06:36.630342 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e40347b7-ed6a-49e8-af1b-d361a332bf94" path="/var/lib/kubelet/pods/e40347b7-ed6a-49e8-af1b-d361a332bf94/volumes" Mar 12 15:06:36 crc kubenswrapper[4832]: I0312 15:06:36.666549 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvtfw\" (UniqueName: \"kubernetes.io/projected/27df49fc-7f53-4145-97b1-9acdbb768496-kube-api-access-mvtfw\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:36 crc kubenswrapper[4832]: I0312 15:06:36.666586 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27df49fc-7f53-4145-97b1-9acdbb768496-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:37 crc kubenswrapper[4832]: I0312 15:06:37.068619 4832 generic.go:334] "Generic (PLEG): container finished" podID="58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5" containerID="f09551b0d4eebac467ba24f2aaaceface48d97d0b642caaab285900ffa7761d1" exitCode=0 Mar 12 15:06:37 crc kubenswrapper[4832]: I0312 15:06:37.070044 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2pdk5" 
event={"ID":"58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5","Type":"ContainerDied","Data":"f09551b0d4eebac467ba24f2aaaceface48d97d0b642caaab285900ffa7761d1"} Mar 12 15:06:37 crc kubenswrapper[4832]: I0312 15:06:37.071687 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5zp7p" event={"ID":"27df49fc-7f53-4145-97b1-9acdbb768496","Type":"ContainerDied","Data":"83a5a0d9ad1e029d613b6970422ed6cf7e3f8bc1323773698f0a8a3d6886068f"} Mar 12 15:06:37 crc kubenswrapper[4832]: I0312 15:06:37.071719 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83a5a0d9ad1e029d613b6970422ed6cf7e3f8bc1323773698f0a8a3d6886068f" Mar 12 15:06:37 crc kubenswrapper[4832]: I0312 15:06:37.071765 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5zp7p" Mar 12 15:06:38 crc kubenswrapper[4832]: I0312 15:06:38.401041 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2pdk5" Mar 12 15:06:38 crc kubenswrapper[4832]: I0312 15:06:38.497717 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl26n\" (UniqueName: \"kubernetes.io/projected/58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5-kube-api-access-kl26n\") pod \"58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5\" (UID: \"58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5\") " Mar 12 15:06:38 crc kubenswrapper[4832]: I0312 15:06:38.497836 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5-config-data\") pod \"58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5\" (UID: \"58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5\") " Mar 12 15:06:38 crc kubenswrapper[4832]: I0312 15:06:38.497900 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5-combined-ca-bundle\") pod \"58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5\" (UID: \"58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5\") " Mar 12 15:06:38 crc kubenswrapper[4832]: I0312 15:06:38.503607 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5-kube-api-access-kl26n" (OuterVolumeSpecName: "kube-api-access-kl26n") pod "58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5" (UID: "58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5"). InnerVolumeSpecName "kube-api-access-kl26n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:38 crc kubenswrapper[4832]: I0312 15:06:38.528491 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5" (UID: "58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:06:38 crc kubenswrapper[4832]: I0312 15:06:38.545121 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5-config-data" (OuterVolumeSpecName: "config-data") pod "58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5" (UID: "58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:06:38 crc kubenswrapper[4832]: I0312 15:06:38.599314 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:38 crc kubenswrapper[4832]: I0312 15:06:38.599346 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:38 crc kubenswrapper[4832]: I0312 15:06:38.599358 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl26n\" (UniqueName: \"kubernetes.io/projected/58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5-kube-api-access-kl26n\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.092775 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2pdk5" event={"ID":"58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5","Type":"ContainerDied","Data":"2ddcfbf5af46c322403eed79b0f34e08478731af8c61d2182ae4fc9cf97adce0"} Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.093031 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ddcfbf5af46c322403eed79b0f34e08478731af8c61d2182ae4fc9cf97adce0" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.092822 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-2pdk5" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.317584 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-fcwhd"] Mar 12 15:06:39 crc kubenswrapper[4832]: E0312 15:06:39.318474 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40347b7-ed6a-49e8-af1b-d361a332bf94" containerName="init" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.318498 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40347b7-ed6a-49e8-af1b-d361a332bf94" containerName="init" Mar 12 15:06:39 crc kubenswrapper[4832]: E0312 15:06:39.318567 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0504dba2-4849-450f-b10c-9669184c2820" containerName="mariadb-account-create-update" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.318580 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0504dba2-4849-450f-b10c-9669184c2820" containerName="mariadb-account-create-update" Mar 12 15:06:39 crc kubenswrapper[4832]: E0312 15:06:39.318602 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27df49fc-7f53-4145-97b1-9acdbb768496" containerName="mariadb-account-create-update" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.318614 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="27df49fc-7f53-4145-97b1-9acdbb768496" containerName="mariadb-account-create-update" Mar 12 15:06:39 crc kubenswrapper[4832]: E0312 15:06:39.318636 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be99adb-61ba-476f-94ad-7e6015445091" containerName="mariadb-account-create-update" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.318647 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be99adb-61ba-476f-94ad-7e6015445091" containerName="mariadb-account-create-update" Mar 12 15:06:39 crc kubenswrapper[4832]: E0312 15:06:39.318670 4832 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5" containerName="keystone-db-sync" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.318679 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5" containerName="keystone-db-sync" Mar 12 15:06:39 crc kubenswrapper[4832]: E0312 15:06:39.318698 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40347b7-ed6a-49e8-af1b-d361a332bf94" containerName="dnsmasq-dns" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.318709 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40347b7-ed6a-49e8-af1b-d361a332bf94" containerName="dnsmasq-dns" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.319019 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e40347b7-ed6a-49e8-af1b-d361a332bf94" containerName="dnsmasq-dns" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.319042 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="27df49fc-7f53-4145-97b1-9acdbb768496" containerName="mariadb-account-create-update" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.319068 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0504dba2-4849-450f-b10c-9669184c2820" containerName="mariadb-account-create-update" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.319085 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5" containerName="keystone-db-sync" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.319101 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be99adb-61ba-476f-94ad-7e6015445091" containerName="mariadb-account-create-update" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.320532 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.336821 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-fcwhd"] Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.385015 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7pnpk"] Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.386302 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.390745 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7gszf" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.390769 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.390977 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.391099 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.391744 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.403217 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7pnpk"] Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.415386 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-config\") pod \"dnsmasq-dns-bbf5cc879-fcwhd\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.415448 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chppj\" (UniqueName: \"kubernetes.io/projected/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-kube-api-access-chppj\") pod \"dnsmasq-dns-bbf5cc879-fcwhd\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.415490 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-fcwhd\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.415603 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-fcwhd\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.415641 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-fcwhd\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.415678 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-fcwhd\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc 
kubenswrapper[4832]: I0312 15:06:39.517631 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-credential-keys\") pod \"keystone-bootstrap-7pnpk\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.517694 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-scripts\") pod \"keystone-bootstrap-7pnpk\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.517714 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-config-data\") pod \"keystone-bootstrap-7pnpk\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.517739 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-fcwhd\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.517780 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-fcwhd\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.517832 
4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-combined-ca-bundle\") pod \"keystone-bootstrap-7pnpk\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.517857 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-fcwhd\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.517891 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-config\") pod \"dnsmasq-dns-bbf5cc879-fcwhd\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.517911 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chppj\" (UniqueName: \"kubernetes.io/projected/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-kube-api-access-chppj\") pod \"dnsmasq-dns-bbf5cc879-fcwhd\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.517932 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-fernet-keys\") pod \"keystone-bootstrap-7pnpk\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.517952 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-fcwhd\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.517982 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlr7m\" (UniqueName: \"kubernetes.io/projected/4e955107-9355-4511-b7f3-6171b221d884-kube-api-access-jlr7m\") pod \"keystone-bootstrap-7pnpk\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.518910 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-fcwhd\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.519342 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-config\") pod \"dnsmasq-dns-bbf5cc879-fcwhd\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.519761 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-fcwhd\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.521627 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-fcwhd\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.523072 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-fcwhd\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.542232 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-2hkzk"] Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.543173 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.565119 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.565728 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-g729q" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.565910 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.582252 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chppj\" (UniqueName: \"kubernetes.io/projected/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-kube-api-access-chppj\") pod \"dnsmasq-dns-bbf5cc879-fcwhd\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.584067 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2hkzk"] Mar 12 15:06:39 crc 
kubenswrapper[4832]: I0312 15:06:39.607648 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56ddd5f6f9-n8mxv"] Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.632787 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-scripts\") pod \"keystone-bootstrap-7pnpk\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.632850 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-config-data\") pod \"keystone-bootstrap-7pnpk\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.632908 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-config-data\") pod \"cinder-db-sync-2hkzk\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.632983 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-combined-ca-bundle\") pod \"keystone-bootstrap-7pnpk\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.633022 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jdjb\" (UniqueName: \"kubernetes.io/projected/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-kube-api-access-8jdjb\") pod \"cinder-db-sync-2hkzk\" (UID: 
\"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.633070 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-fernet-keys\") pod \"keystone-bootstrap-7pnpk\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.633109 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-scripts\") pod \"cinder-db-sync-2hkzk\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.633132 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-combined-ca-bundle\") pod \"cinder-db-sync-2hkzk\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.633150 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-db-sync-config-data\") pod \"cinder-db-sync-2hkzk\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.633168 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlr7m\" (UniqueName: \"kubernetes.io/projected/4e955107-9355-4511-b7f3-6171b221d884-kube-api-access-jlr7m\") pod \"keystone-bootstrap-7pnpk\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " 
pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.633197 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-credential-keys\") pod \"keystone-bootstrap-7pnpk\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.633220 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-etc-machine-id\") pod \"cinder-db-sync-2hkzk\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.647718 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.660426 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-config-data\") pod \"keystone-bootstrap-7pnpk\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.665972 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-combined-ca-bundle\") pod \"keystone-bootstrap-7pnpk\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.720167 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlr7m\" (UniqueName: 
\"kubernetes.io/projected/4e955107-9355-4511-b7f3-6171b221d884-kube-api-access-jlr7m\") pod \"keystone-bootstrap-7pnpk\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.720168 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-credential-keys\") pod \"keystone-bootstrap-7pnpk\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.720714 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-fernet-keys\") pod \"keystone-bootstrap-7pnpk\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.721344 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56ddd5f6f9-n8mxv" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.747414 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.747649 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.747775 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-9z6sn" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.747902 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.748007 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56ddd5f6f9-n8mxv"] Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.750627 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-scripts\") pod \"keystone-bootstrap-7pnpk\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.763346 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jdjb\" (UniqueName: \"kubernetes.io/projected/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-kube-api-access-8jdjb\") pod \"cinder-db-sync-2hkzk\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.763784 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-scripts\") pod \"cinder-db-sync-2hkzk\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 
crc kubenswrapper[4832]: I0312 15:06:39.763818 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-combined-ca-bundle\") pod \"cinder-db-sync-2hkzk\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.763837 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-db-sync-config-data\") pod \"cinder-db-sync-2hkzk\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.763866 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-etc-machine-id\") pod \"cinder-db-sync-2hkzk\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.763950 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-config-data\") pod \"cinder-db-sync-2hkzk\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.767815 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-config-data\") pod \"cinder-db-sync-2hkzk\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.769744 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-etc-machine-id\") pod \"cinder-db-sync-2hkzk\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.796779 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-combined-ca-bundle\") pod \"cinder-db-sync-2hkzk\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.797130 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-db-sync-config-data\") pod \"cinder-db-sync-2hkzk\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.813656 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-scripts\") pod \"cinder-db-sync-2hkzk\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.820786 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-sf9s6"] Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.822385 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sf9s6" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.842989 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jdjb\" (UniqueName: \"kubernetes.io/projected/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-kube-api-access-8jdjb\") pod \"cinder-db-sync-2hkzk\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.852515 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.852810 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.853359 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vrq42" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.884405 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-scripts\") pod \"horizon-56ddd5f6f9-n8mxv\" (UID: \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " pod="openstack/horizon-56ddd5f6f9-n8mxv" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.884457 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-config-data\") pod \"horizon-56ddd5f6f9-n8mxv\" (UID: \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " pod="openstack/horizon-56ddd5f6f9-n8mxv" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.884614 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-horizon-secret-key\") pod \"horizon-56ddd5f6f9-n8mxv\" (UID: \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " pod="openstack/horizon-56ddd5f6f9-n8mxv" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.884897 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-logs\") pod \"horizon-56ddd5f6f9-n8mxv\" (UID: \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " pod="openstack/horizon-56ddd5f6f9-n8mxv" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.884955 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxf86\" (UniqueName: \"kubernetes.io/projected/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-kube-api-access-bxf86\") pod \"horizon-56ddd5f6f9-n8mxv\" (UID: \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " pod="openstack/horizon-56ddd5f6f9-n8mxv" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.896455 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.898342 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.911496 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sf9s6"] Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.915365 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.915564 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.930851 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.940948 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.976650 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-brwnc"] Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.977629 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-brwnc" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.984772 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-pz5qf"] Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.985784 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-pz5qf" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.991762 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-sdzt6" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.991957 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.992074 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.992150 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqp46\" (UniqueName: \"kubernetes.io/projected/fde061f5-d765-49d5-9cff-77ac3d31dd40-kube-api-access-qqp46\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.992200 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/40b2fcd2-d826-44f8-a9e1-125b17905fae-config\") pod \"neutron-db-sync-sf9s6\" (UID: \"40b2fcd2-d826-44f8-a9e1-125b17905fae\") " pod="openstack/neutron-db-sync-sf9s6" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.992261 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-logs\") pod \"horizon-56ddd5f6f9-n8mxv\" (UID: \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " pod="openstack/horizon-56ddd5f6f9-n8mxv" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.992282 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b2fcd2-d826-44f8-a9e1-125b17905fae-combined-ca-bundle\") pod \"neutron-db-sync-sf9s6\" (UID: \"40b2fcd2-d826-44f8-a9e1-125b17905fae\") " pod="openstack/neutron-db-sync-sf9s6" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.992297 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-config-data\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.992314 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj4lg\" (UniqueName: \"kubernetes.io/projected/40b2fcd2-d826-44f8-a9e1-125b17905fae-kube-api-access-qj4lg\") pod \"neutron-db-sync-sf9s6\" (UID: \"40b2fcd2-d826-44f8-a9e1-125b17905fae\") " pod="openstack/neutron-db-sync-sf9s6" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.992333 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxf86\" (UniqueName: \"kubernetes.io/projected/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-kube-api-access-bxf86\") pod 
\"horizon-56ddd5f6f9-n8mxv\" (UID: \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " pod="openstack/horizon-56ddd5f6f9-n8mxv" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.992351 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fde061f5-d765-49d5-9cff-77ac3d31dd40-log-httpd\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.992367 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fde061f5-d765-49d5-9cff-77ac3d31dd40-run-httpd\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.992382 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-scripts\") pod \"horizon-56ddd5f6f9-n8mxv\" (UID: \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " pod="openstack/horizon-56ddd5f6f9-n8mxv" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.992398 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-config-data\") pod \"horizon-56ddd5f6f9-n8mxv\" (UID: \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " pod="openstack/horizon-56ddd5f6f9-n8mxv" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.992417 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-scripts\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 
15:06:39.992436 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.992482 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.992522 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-horizon-secret-key\") pod \"horizon-56ddd5f6f9-n8mxv\" (UID: \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " pod="openstack/horizon-56ddd5f6f9-n8mxv" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.993665 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-fcwhd"] Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.996916 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-logs\") pod \"horizon-56ddd5f6f9-n8mxv\" (UID: \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " pod="openstack/horizon-56ddd5f6f9-n8mxv" Mar 12 15:06:39 crc kubenswrapper[4832]: I0312 15:06:39.997680 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-horizon-secret-key\") pod \"horizon-56ddd5f6f9-n8mxv\" (UID: \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " pod="openstack/horizon-56ddd5f6f9-n8mxv" Mar 12 
15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.004794 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5t7mn" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.005051 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.007413 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-brwnc"] Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.007896 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-scripts\") pod \"horizon-56ddd5f6f9-n8mxv\" (UID: \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " pod="openstack/horizon-56ddd5f6f9-n8mxv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.008443 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-config-data\") pod \"horizon-56ddd5f6f9-n8mxv\" (UID: \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " pod="openstack/horizon-56ddd5f6f9-n8mxv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.025562 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-pz5qf"] Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.032833 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.053052 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxf86\" (UniqueName: \"kubernetes.io/projected/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-kube-api-access-bxf86\") pod \"horizon-56ddd5f6f9-n8mxv\" (UID: \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " pod="openstack/horizon-56ddd5f6f9-n8mxv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.090228 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lp2dv"] Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.092050 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.096495 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fde061f5-d765-49d5-9cff-77ac3d31dd40-log-httpd\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.096765 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fde061f5-d765-49d5-9cff-77ac3d31dd40-run-httpd\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.096795 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-scripts\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.096813 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.096839 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdded1bd-9b32-465d-9226-618cf5d0e8bb-logs\") pod \"placement-db-sync-pz5qf\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " pod="openstack/placement-db-sync-pz5qf" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.096858 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgrgc\" (UniqueName: \"kubernetes.io/projected/6f2adafe-55f5-4149-893d-bdf63ec5ef7d-kube-api-access-wgrgc\") pod \"barbican-db-sync-brwnc\" (UID: \"6f2adafe-55f5-4149-893d-bdf63ec5ef7d\") " pod="openstack/barbican-db-sync-brwnc" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.096879 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.096910 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqp46\" (UniqueName: \"kubernetes.io/projected/fde061f5-d765-49d5-9cff-77ac3d31dd40-kube-api-access-qqp46\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.096926 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f2adafe-55f5-4149-893d-bdf63ec5ef7d-combined-ca-bundle\") pod 
\"barbican-db-sync-brwnc\" (UID: \"6f2adafe-55f5-4149-893d-bdf63ec5ef7d\") " pod="openstack/barbican-db-sync-brwnc" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.096949 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdded1bd-9b32-465d-9226-618cf5d0e8bb-combined-ca-bundle\") pod \"placement-db-sync-pz5qf\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " pod="openstack/placement-db-sync-pz5qf" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.096965 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdded1bd-9b32-465d-9226-618cf5d0e8bb-scripts\") pod \"placement-db-sync-pz5qf\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " pod="openstack/placement-db-sync-pz5qf" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.096977 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdded1bd-9b32-465d-9226-618cf5d0e8bb-config-data\") pod \"placement-db-sync-pz5qf\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " pod="openstack/placement-db-sync-pz5qf" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.097006 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/40b2fcd2-d826-44f8-a9e1-125b17905fae-config\") pod \"neutron-db-sync-sf9s6\" (UID: \"40b2fcd2-d826-44f8-a9e1-125b17905fae\") " pod="openstack/neutron-db-sync-sf9s6" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.097037 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f2adafe-55f5-4149-893d-bdf63ec5ef7d-db-sync-config-data\") pod \"barbican-db-sync-brwnc\" (UID: 
\"6f2adafe-55f5-4149-893d-bdf63ec5ef7d\") " pod="openstack/barbican-db-sync-brwnc" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.097083 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b2fcd2-d826-44f8-a9e1-125b17905fae-combined-ca-bundle\") pod \"neutron-db-sync-sf9s6\" (UID: \"40b2fcd2-d826-44f8-a9e1-125b17905fae\") " pod="openstack/neutron-db-sync-sf9s6" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.097098 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-config-data\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.097119 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj4lg\" (UniqueName: \"kubernetes.io/projected/40b2fcd2-d826-44f8-a9e1-125b17905fae-kube-api-access-qj4lg\") pod \"neutron-db-sync-sf9s6\" (UID: \"40b2fcd2-d826-44f8-a9e1-125b17905fae\") " pod="openstack/neutron-db-sync-sf9s6" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.097142 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjmb2\" (UniqueName: \"kubernetes.io/projected/bdded1bd-9b32-465d-9226-618cf5d0e8bb-kube-api-access-vjmb2\") pod \"placement-db-sync-pz5qf\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " pod="openstack/placement-db-sync-pz5qf" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.097572 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fde061f5-d765-49d5-9cff-77ac3d31dd40-log-httpd\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 
15:06:40.097770 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fde061f5-d765-49d5-9cff-77ac3d31dd40-run-httpd\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.120763 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56ddd5f6f9-n8mxv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.121944 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b2fcd2-d826-44f8-a9e1-125b17905fae-combined-ca-bundle\") pod \"neutron-db-sync-sf9s6\" (UID: \"40b2fcd2-d826-44f8-a9e1-125b17905fae\") " pod="openstack/neutron-db-sync-sf9s6" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.130208 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.133580 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.163698 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-config-data\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.163764 4832 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/horizon-549d6b9b97-cjxdq"] Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.165115 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-scripts\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.165139 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-549d6b9b97-cjxdq" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.171102 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/40b2fcd2-d826-44f8-a9e1-125b17905fae-config\") pod \"neutron-db-sync-sf9s6\" (UID: \"40b2fcd2-d826-44f8-a9e1-125b17905fae\") " pod="openstack/neutron-db-sync-sf9s6" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.175172 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj4lg\" (UniqueName: \"kubernetes.io/projected/40b2fcd2-d826-44f8-a9e1-125b17905fae-kube-api-access-qj4lg\") pod \"neutron-db-sync-sf9s6\" (UID: \"40b2fcd2-d826-44f8-a9e1-125b17905fae\") " pod="openstack/neutron-db-sync-sf9s6" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.191850 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lp2dv"] Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.199354 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqp46\" (UniqueName: \"kubernetes.io/projected/fde061f5-d765-49d5-9cff-77ac3d31dd40-kube-api-access-qqp46\") pod \"ceilometer-0\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " pod="openstack/ceilometer-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.200139 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/6f2adafe-55f5-4149-893d-bdf63ec5ef7d-db-sync-config-data\") pod \"barbican-db-sync-brwnc\" (UID: \"6f2adafe-55f5-4149-893d-bdf63ec5ef7d\") " pod="openstack/barbican-db-sync-brwnc" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.200237 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-lp2dv\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.200358 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mldp\" (UniqueName: \"kubernetes.io/projected/ce9a0e38-bff9-4748-85a4-19e165398bae-kube-api-access-4mldp\") pod \"dnsmasq-dns-56df8fb6b7-lp2dv\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.200427 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjmb2\" (UniqueName: \"kubernetes.io/projected/bdded1bd-9b32-465d-9226-618cf5d0e8bb-kube-api-access-vjmb2\") pod \"placement-db-sync-pz5qf\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " pod="openstack/placement-db-sync-pz5qf" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.200608 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdded1bd-9b32-465d-9226-618cf5d0e8bb-logs\") pod \"placement-db-sync-pz5qf\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " pod="openstack/placement-db-sync-pz5qf" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.200692 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgrgc\" (UniqueName: 
\"kubernetes.io/projected/6f2adafe-55f5-4149-893d-bdf63ec5ef7d-kube-api-access-wgrgc\") pod \"barbican-db-sync-brwnc\" (UID: \"6f2adafe-55f5-4149-893d-bdf63ec5ef7d\") " pod="openstack/barbican-db-sync-brwnc" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.200726 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-lp2dv\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.200745 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-lp2dv\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.200789 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f2adafe-55f5-4149-893d-bdf63ec5ef7d-combined-ca-bundle\") pod \"barbican-db-sync-brwnc\" (UID: \"6f2adafe-55f5-4149-893d-bdf63ec5ef7d\") " pod="openstack/barbican-db-sync-brwnc" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.200811 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-config\") pod \"dnsmasq-dns-56df8fb6b7-lp2dv\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.200844 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-lp2dv\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.201198 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdded1bd-9b32-465d-9226-618cf5d0e8bb-logs\") pod \"placement-db-sync-pz5qf\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " pod="openstack/placement-db-sync-pz5qf" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.201381 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdded1bd-9b32-465d-9226-618cf5d0e8bb-combined-ca-bundle\") pod \"placement-db-sync-pz5qf\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " pod="openstack/placement-db-sync-pz5qf" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.201573 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdded1bd-9b32-465d-9226-618cf5d0e8bb-scripts\") pod \"placement-db-sync-pz5qf\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " pod="openstack/placement-db-sync-pz5qf" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.201673 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdded1bd-9b32-465d-9226-618cf5d0e8bb-config-data\") pod \"placement-db-sync-pz5qf\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " pod="openstack/placement-db-sync-pz5qf" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.236800 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdded1bd-9b32-465d-9226-618cf5d0e8bb-scripts\") pod \"placement-db-sync-pz5qf\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " 
pod="openstack/placement-db-sync-pz5qf" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.236937 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdded1bd-9b32-465d-9226-618cf5d0e8bb-combined-ca-bundle\") pod \"placement-db-sync-pz5qf\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " pod="openstack/placement-db-sync-pz5qf" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.238744 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-549d6b9b97-cjxdq"] Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.242285 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f2adafe-55f5-4149-893d-bdf63ec5ef7d-db-sync-config-data\") pod \"barbican-db-sync-brwnc\" (UID: \"6f2adafe-55f5-4149-893d-bdf63ec5ef7d\") " pod="openstack/barbican-db-sync-brwnc" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.245541 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f2adafe-55f5-4149-893d-bdf63ec5ef7d-combined-ca-bundle\") pod \"barbican-db-sync-brwnc\" (UID: \"6f2adafe-55f5-4149-893d-bdf63ec5ef7d\") " pod="openstack/barbican-db-sync-brwnc" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.247404 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.248776 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdded1bd-9b32-465d-9226-618cf5d0e8bb-config-data\") pod \"placement-db-sync-pz5qf\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " pod="openstack/placement-db-sync-pz5qf" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.254251 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgrgc\" (UniqueName: \"kubernetes.io/projected/6f2adafe-55f5-4149-893d-bdf63ec5ef7d-kube-api-access-wgrgc\") pod \"barbican-db-sync-brwnc\" (UID: \"6f2adafe-55f5-4149-893d-bdf63ec5ef7d\") " pod="openstack/barbican-db-sync-brwnc" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.255552 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjmb2\" (UniqueName: \"kubernetes.io/projected/bdded1bd-9b32-465d-9226-618cf5d0e8bb-kube-api-access-vjmb2\") pod \"placement-db-sync-pz5qf\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " pod="openstack/placement-db-sync-pz5qf" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.298389 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.299570 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.299648 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.302539 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.302570 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-65cqt" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.302736 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.302804 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.307858 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-brwnc" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.315358 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mldp\" (UniqueName: \"kubernetes.io/projected/ce9a0e38-bff9-4748-85a4-19e165398bae-kube-api-access-4mldp\") pod \"dnsmasq-dns-56df8fb6b7-lp2dv\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.315445 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19af684a-ef80-4214-89c5-4c184c4ca0d6-config-data\") pod \"horizon-549d6b9b97-cjxdq\" (UID: \"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " pod="openstack/horizon-549d6b9b97-cjxdq" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.315489 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/19af684a-ef80-4214-89c5-4c184c4ca0d6-horizon-secret-key\") pod \"horizon-549d6b9b97-cjxdq\" (UID: \"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " pod="openstack/horizon-549d6b9b97-cjxdq" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.315357 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-pz5qf" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.315544 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19af684a-ef80-4214-89c5-4c184c4ca0d6-scripts\") pod \"horizon-549d6b9b97-cjxdq\" (UID: \"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " pod="openstack/horizon-549d6b9b97-cjxdq" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.315603 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-lp2dv\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.315628 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-lp2dv\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.315666 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-config\") pod \"dnsmasq-dns-56df8fb6b7-lp2dv\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.315691 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-lp2dv\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.315716 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4mkl\" (UniqueName: \"kubernetes.io/projected/19af684a-ef80-4214-89c5-4c184c4ca0d6-kube-api-access-m4mkl\") pod \"horizon-549d6b9b97-cjxdq\" (UID: \"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " pod="openstack/horizon-549d6b9b97-cjxdq" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.315739 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19af684a-ef80-4214-89c5-4c184c4ca0d6-logs\") pod \"horizon-549d6b9b97-cjxdq\" (UID: \"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " pod="openstack/horizon-549d6b9b97-cjxdq" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.315828 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-lp2dv\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.316417 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-lp2dv\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.316704 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-lp2dv\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.318802 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-lp2dv\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.324764 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-config\") pod \"dnsmasq-dns-56df8fb6b7-lp2dv\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.324849 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-lp2dv\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.332858 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mldp\" (UniqueName: \"kubernetes.io/projected/ce9a0e38-bff9-4748-85a4-19e165398bae-kube-api-access-4mldp\") pod \"dnsmasq-dns-56df8fb6b7-lp2dv\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.419381 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-config-data\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.419433 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.419525 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bb63dfa-bae4-4b9e-a397-86733ad67149-logs\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.419556 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xj9g\" (UniqueName: \"kubernetes.io/projected/9bb63dfa-bae4-4b9e-a397-86733ad67149-kube-api-access-4xj9g\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.419661 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19af684a-ef80-4214-89c5-4c184c4ca0d6-config-data\") pod \"horizon-549d6b9b97-cjxdq\" (UID: \"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " pod="openstack/horizon-549d6b9b97-cjxdq" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.419771 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/19af684a-ef80-4214-89c5-4c184c4ca0d6-horizon-secret-key\") pod \"horizon-549d6b9b97-cjxdq\" (UID: \"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " pod="openstack/horizon-549d6b9b97-cjxdq" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.419814 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19af684a-ef80-4214-89c5-4c184c4ca0d6-scripts\") pod \"horizon-549d6b9b97-cjxdq\" (UID: \"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " pod="openstack/horizon-549d6b9b97-cjxdq" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.419849 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.419914 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-scripts\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.419980 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19af684a-ef80-4214-89c5-4c184c4ca0d6-logs\") pod \"horizon-549d6b9b97-cjxdq\" (UID: \"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " pod="openstack/horizon-549d6b9b97-cjxdq" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.419999 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4mkl\" (UniqueName: 
\"kubernetes.io/projected/19af684a-ef80-4214-89c5-4c184c4ca0d6-kube-api-access-m4mkl\") pod \"horizon-549d6b9b97-cjxdq\" (UID: \"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " pod="openstack/horizon-549d6b9b97-cjxdq" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.420090 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bb63dfa-bae4-4b9e-a397-86733ad67149-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.420113 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.421811 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19af684a-ef80-4214-89c5-4c184c4ca0d6-config-data\") pod \"horizon-549d6b9b97-cjxdq\" (UID: \"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " pod="openstack/horizon-549d6b9b97-cjxdq" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.423521 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19af684a-ef80-4214-89c5-4c184c4ca0d6-scripts\") pod \"horizon-549d6b9b97-cjxdq\" (UID: \"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " pod="openstack/horizon-549d6b9b97-cjxdq" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.423578 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19af684a-ef80-4214-89c5-4c184c4ca0d6-logs\") pod \"horizon-549d6b9b97-cjxdq\" (UID: 
\"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " pod="openstack/horizon-549d6b9b97-cjxdq" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.432100 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19af684a-ef80-4214-89c5-4c184c4ca0d6-horizon-secret-key\") pod \"horizon-549d6b9b97-cjxdq\" (UID: \"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " pod="openstack/horizon-549d6b9b97-cjxdq" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.444674 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4mkl\" (UniqueName: \"kubernetes.io/projected/19af684a-ef80-4214-89c5-4c184c4ca0d6-kube-api-access-m4mkl\") pod \"horizon-549d6b9b97-cjxdq\" (UID: \"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " pod="openstack/horizon-549d6b9b97-cjxdq" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.453851 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-fcwhd"] Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.466218 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sf9s6" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.468280 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.521243 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bb63dfa-bae4-4b9e-a397-86733ad67149-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.521283 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.521312 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-config-data\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.521329 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.521353 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bb63dfa-bae4-4b9e-a397-86733ad67149-logs\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc 
kubenswrapper[4832]: I0312 15:06:40.521375 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xj9g\" (UniqueName: \"kubernetes.io/projected/9bb63dfa-bae4-4b9e-a397-86733ad67149-kube-api-access-4xj9g\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.521432 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.521461 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-scripts\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.522250 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bb63dfa-bae4-4b9e-a397-86733ad67149-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.522295 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.522783 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bb63dfa-bae4-4b9e-a397-86733ad67149-logs\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.529550 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-config-data\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.531350 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.532213 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-scripts\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.543165 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.549238 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xj9g\" (UniqueName: 
\"kubernetes.io/projected/9bb63dfa-bae4-4b9e-a397-86733ad67149-kube-api-access-4xj9g\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.565637 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-549d6b9b97-cjxdq" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.576251 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.617689 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.692610 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2hkzk"] Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.854908 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56ddd5f6f9-n8mxv"] Mar 12 15:06:40 crc kubenswrapper[4832]: W0312 15:06:40.877631 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96736cea_2309_4ec2_b1b5_edb4b4aafb2b.slice/crio-4d3431a32494331a4c392a90d4f7d122b1676b2f742d2411e0e3669e9a85f5b2 WatchSource:0}: Error finding container 4d3431a32494331a4c392a90d4f7d122b1676b2f742d2411e0e3669e9a85f5b2: Status 404 returned error can't find the container with id 4d3431a32494331a4c392a90d4f7d122b1676b2f742d2411e0e3669e9a85f5b2 Mar 12 15:06:40 crc kubenswrapper[4832]: I0312 15:06:40.883220 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7pnpk"] Mar 12 15:06:41 crc 
kubenswrapper[4832]: I0312 15:06:41.045324 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.047166 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.050608 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.050938 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.060076 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 15:06:41 crc kubenswrapper[4832]: W0312 15:06:41.101884 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f2adafe_55f5_4149_893d_bdf63ec5ef7d.slice/crio-9d82a762554b0fee4c1c86fca26ff10dfa54b136e9a5a6db11413f2a96224ab7 WatchSource:0}: Error finding container 9d82a762554b0fee4c1c86fca26ff10dfa54b136e9a5a6db11413f2a96224ab7: Status 404 returned error can't find the container with id 9d82a762554b0fee4c1c86fca26ff10dfa54b136e9a5a6db11413f2a96224ab7 Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.102415 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.121306 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-brwnc"] Mar 12 15:06:41 crc kubenswrapper[4832]: W0312 15:06:41.134211 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfde061f5_d765_49d5_9cff_77ac3d31dd40.slice/crio-833b6e5c81219070f273fac99c1b9302ac25733551609e590c92a352b7f39640 WatchSource:0}: Error finding container 833b6e5c81219070f273fac99c1b9302ac25733551609e590c92a352b7f39640: Status 404 returned error can't find the container with id 833b6e5c81219070f273fac99c1b9302ac25733551609e590c92a352b7f39640 Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.141069 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq8xl\" (UniqueName: \"kubernetes.io/projected/da56c936-1396-41c7-a17b-bd5a3bb43c05-kube-api-access-gq8xl\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.141125 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.141157 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/da56c936-1396-41c7-a17b-bd5a3bb43c05-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.141174 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " 
pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.141203 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da56c936-1396-41c7-a17b-bd5a3bb43c05-logs\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.141258 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-config-data\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.141292 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.141314 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-scripts\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.143041 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7pnpk" event={"ID":"4e955107-9355-4511-b7f3-6171b221d884","Type":"ContainerStarted","Data":"3d2e76ed71c732d624850e13ed167b96c0076ec27ff06ad6eeed15dde1df0ce7"} Mar 12 15:06:41 crc kubenswrapper[4832]: 
I0312 15:06:41.148335 4832 generic.go:334] "Generic (PLEG): container finished" podID="cbf4612a-8aa9-4f2a-ad97-a6262e51693a" containerID="8c815832e8308ef7bb3598fc82b112a1d84992f83e0550f97f390872e3277e94" exitCode=0 Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.148566 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" event={"ID":"cbf4612a-8aa9-4f2a-ad97-a6262e51693a","Type":"ContainerDied","Data":"8c815832e8308ef7bb3598fc82b112a1d84992f83e0550f97f390872e3277e94"} Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.148617 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" event={"ID":"cbf4612a-8aa9-4f2a-ad97-a6262e51693a","Type":"ContainerStarted","Data":"4337982f113e515164fbf5bf63e04fe9b6dda3aa194b54cace388f42eecf0dc3"} Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.153013 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56ddd5f6f9-n8mxv" event={"ID":"96736cea-2309-4ec2-b1b5-edb4b4aafb2b","Type":"ContainerStarted","Data":"4d3431a32494331a4c392a90d4f7d122b1676b2f742d2411e0e3669e9a85f5b2"} Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.158998 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-brwnc" event={"ID":"6f2adafe-55f5-4149-893d-bdf63ec5ef7d","Type":"ContainerStarted","Data":"9d82a762554b0fee4c1c86fca26ff10dfa54b136e9a5a6db11413f2a96224ab7"} Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.162248 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2hkzk" event={"ID":"8a7d0054-4697-4cbb-bc50-18024fc3bfbc","Type":"ContainerStarted","Data":"60e260f83e96574ad3b6326fc262920a98b192e3b1c0a03bbb91aa319ae2688d"} Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.218190 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-pz5qf"] Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.243216 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq8xl\" (UniqueName: \"kubernetes.io/projected/da56c936-1396-41c7-a17b-bd5a3bb43c05-kube-api-access-gq8xl\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.243278 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.243309 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/da56c936-1396-41c7-a17b-bd5a3bb43c05-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.243325 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.243352 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da56c936-1396-41c7-a17b-bd5a3bb43c05-logs\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.243393 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-config-data\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.243429 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.243565 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-scripts\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.245686 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sf9s6"] Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.245968 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.246290 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/da56c936-1396-41c7-a17b-bd5a3bb43c05-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 
15:06:41.248865 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-scripts\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.249567 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da56c936-1396-41c7-a17b-bd5a3bb43c05-logs\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.252263 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.254339 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lp2dv"] Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.256708 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.267553 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq8xl\" (UniqueName: \"kubernetes.io/projected/da56c936-1396-41c7-a17b-bd5a3bb43c05-kube-api-access-gq8xl\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " 
pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.268116 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-config-data\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.277975 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.376987 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.439303 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-549d6b9b97-cjxdq"] Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.531784 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.580925 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.651473 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-config\") pod \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.651587 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-dns-swift-storage-0\") pod \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.651646 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chppj\" (UniqueName: \"kubernetes.io/projected/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-kube-api-access-chppj\") pod \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.651695 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-ovsdbserver-sb\") pod \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.651718 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-dns-svc\") pod \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " 
Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.651935 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-ovsdbserver-nb\") pod \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\" (UID: \"cbf4612a-8aa9-4f2a-ad97-a6262e51693a\") " Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.667372 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-kube-api-access-chppj" (OuterVolumeSpecName: "kube-api-access-chppj") pod "cbf4612a-8aa9-4f2a-ad97-a6262e51693a" (UID: "cbf4612a-8aa9-4f2a-ad97-a6262e51693a"). InnerVolumeSpecName "kube-api-access-chppj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.699854 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cbf4612a-8aa9-4f2a-ad97-a6262e51693a" (UID: "cbf4612a-8aa9-4f2a-ad97-a6262e51693a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.704028 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cbf4612a-8aa9-4f2a-ad97-a6262e51693a" (UID: "cbf4612a-8aa9-4f2a-ad97-a6262e51693a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.718566 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cbf4612a-8aa9-4f2a-ad97-a6262e51693a" (UID: "cbf4612a-8aa9-4f2a-ad97-a6262e51693a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.740624 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-config" (OuterVolumeSpecName: "config") pod "cbf4612a-8aa9-4f2a-ad97-a6262e51693a" (UID: "cbf4612a-8aa9-4f2a-ad97-a6262e51693a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.744820 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cbf4612a-8aa9-4f2a-ad97-a6262e51693a" (UID: "cbf4612a-8aa9-4f2a-ad97-a6262e51693a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.754167 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.754195 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.754204 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.754213 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.754223 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:41 crc kubenswrapper[4832]: I0312 15:06:41.754232 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chppj\" (UniqueName: \"kubernetes.io/projected/cbf4612a-8aa9-4f2a-ad97-a6262e51693a-kube-api-access-chppj\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.089096 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.129942 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 
15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.180097 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-549d6b9b97-cjxdq"] Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.220807 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-65d6478577-sl5mq"] Mar 12 15:06:42 crc kubenswrapper[4832]: E0312 15:06:42.221811 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf4612a-8aa9-4f2a-ad97-a6262e51693a" containerName="init" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.221843 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf4612a-8aa9-4f2a-ad97-a6262e51693a" containerName="init" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.222457 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf4612a-8aa9-4f2a-ad97-a6262e51693a" containerName="init" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.226270 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65d6478577-sl5mq" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.226648 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fde061f5-d765-49d5-9cff-77ac3d31dd40","Type":"ContainerStarted","Data":"833b6e5c81219070f273fac99c1b9302ac25733551609e590c92a352b7f39640"} Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.235785 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65d6478577-sl5mq"] Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.246403 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7pnpk" event={"ID":"4e955107-9355-4511-b7f3-6171b221d884","Type":"ContainerStarted","Data":"b82a8c4b0b054a192a4cf5b804079aad3e000cffc5cd3cc5114fede40c2fa717"} Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.266797 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"da56c936-1396-41c7-a17b-bd5a3bb43c05","Type":"ContainerStarted","Data":"1b4fc6f136659b276f4946b6ae231947e423acfa77f5140e52fc7544ea9027f9"} Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.277497 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sf9s6" event={"ID":"40b2fcd2-d826-44f8-a9e1-125b17905fae","Type":"ContainerStarted","Data":"27d3c0f53caf72eaced680d7ed078fe786c01365274749f744a9ae440320d658"} Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.277581 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sf9s6" event={"ID":"40b2fcd2-d826-44f8-a9e1-125b17905fae","Type":"ContainerStarted","Data":"ec6a9ee63cef22e7977b58db77bee7d24819a23e435f5d97254ec43564c1c3bc"} Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.285327 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.291762 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pz5qf" event={"ID":"bdded1bd-9b32-465d-9226-618cf5d0e8bb","Type":"ContainerStarted","Data":"33c2e8e8b215c2df6e1c7b982ffb2da46592587dcf55f9fb7831c1c18cfe6614"} Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.296868 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.308013 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-549d6b9b97-cjxdq" event={"ID":"19af684a-ef80-4214-89c5-4c184c4ca0d6","Type":"ContainerStarted","Data":"60de4d264e831749f5814cb9c2787b88a0191e0a5aa966e17d094396a073c272"} Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.319871 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"9bb63dfa-bae4-4b9e-a397-86733ad67149","Type":"ContainerStarted","Data":"9541674410e03113be7d2a77da66fca08875612bef1873580ee6a6a3fa06d969"} Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.324023 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7pnpk" podStartSLOduration=3.324001852 podStartE2EDuration="3.324001852s" podCreationTimestamp="2026-03-12 15:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:06:42.288206093 +0000 UTC m=+1160.932220329" watchObservedRunningTime="2026-03-12 15:06:42.324001852 +0000 UTC m=+1160.968016078" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.330562 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-sf9s6" podStartSLOduration=3.330548511 podStartE2EDuration="3.330548511s" podCreationTimestamp="2026-03-12 15:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:06:42.315018831 +0000 UTC m=+1160.959033057" watchObservedRunningTime="2026-03-12 15:06:42.330548511 +0000 UTC m=+1160.974562737" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.337671 4832 generic.go:334] "Generic (PLEG): container finished" podID="ce9a0e38-bff9-4748-85a4-19e165398bae" containerID="b55fd9b7e82c04e359a691a70ef838dc1e241d3c323f286c97098be1131fb3cf" exitCode=0 Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.337898 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" event={"ID":"ce9a0e38-bff9-4748-85a4-19e165398bae","Type":"ContainerDied","Data":"b55fd9b7e82c04e359a691a70ef838dc1e241d3c323f286c97098be1131fb3cf"} Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.337923 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" event={"ID":"ce9a0e38-bff9-4748-85a4-19e165398bae","Type":"ContainerStarted","Data":"9f30f1245c55c51b1c56f0d888585ca5ab371887f781878d7210d79c25672046"} Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.368793 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/348acb62-dfd9-46b8-9411-4706a2ca646f-scripts\") pod \"horizon-65d6478577-sl5mq\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " pod="openstack/horizon-65d6478577-sl5mq" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.368841 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbf6b\" (UniqueName: \"kubernetes.io/projected/348acb62-dfd9-46b8-9411-4706a2ca646f-kube-api-access-wbf6b\") pod \"horizon-65d6478577-sl5mq\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " pod="openstack/horizon-65d6478577-sl5mq" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.368881 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/348acb62-dfd9-46b8-9411-4706a2ca646f-logs\") pod \"horizon-65d6478577-sl5mq\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " pod="openstack/horizon-65d6478577-sl5mq" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.368955 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/348acb62-dfd9-46b8-9411-4706a2ca646f-config-data\") pod \"horizon-65d6478577-sl5mq\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " pod="openstack/horizon-65d6478577-sl5mq" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.368984 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/348acb62-dfd9-46b8-9411-4706a2ca646f-horizon-secret-key\") pod \"horizon-65d6478577-sl5mq\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " pod="openstack/horizon-65d6478577-sl5mq" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.373279 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" event={"ID":"cbf4612a-8aa9-4f2a-ad97-a6262e51693a","Type":"ContainerDied","Data":"4337982f113e515164fbf5bf63e04fe9b6dda3aa194b54cace388f42eecf0dc3"} Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.373341 4832 scope.go:117] "RemoveContainer" containerID="8c815832e8308ef7bb3598fc82b112a1d84992f83e0550f97f390872e3277e94" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.373526 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-fcwhd" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.474635 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/348acb62-dfd9-46b8-9411-4706a2ca646f-scripts\") pod \"horizon-65d6478577-sl5mq\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " pod="openstack/horizon-65d6478577-sl5mq" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.474694 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbf6b\" (UniqueName: \"kubernetes.io/projected/348acb62-dfd9-46b8-9411-4706a2ca646f-kube-api-access-wbf6b\") pod \"horizon-65d6478577-sl5mq\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " pod="openstack/horizon-65d6478577-sl5mq" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.474747 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/348acb62-dfd9-46b8-9411-4706a2ca646f-logs\") pod \"horizon-65d6478577-sl5mq\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " 
pod="openstack/horizon-65d6478577-sl5mq" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.474815 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/348acb62-dfd9-46b8-9411-4706a2ca646f-config-data\") pod \"horizon-65d6478577-sl5mq\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " pod="openstack/horizon-65d6478577-sl5mq" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.474847 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/348acb62-dfd9-46b8-9411-4706a2ca646f-horizon-secret-key\") pod \"horizon-65d6478577-sl5mq\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " pod="openstack/horizon-65d6478577-sl5mq" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.476308 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/348acb62-dfd9-46b8-9411-4706a2ca646f-scripts\") pod \"horizon-65d6478577-sl5mq\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " pod="openstack/horizon-65d6478577-sl5mq" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.479275 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/348acb62-dfd9-46b8-9411-4706a2ca646f-logs\") pod \"horizon-65d6478577-sl5mq\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " pod="openstack/horizon-65d6478577-sl5mq" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.481619 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/348acb62-dfd9-46b8-9411-4706a2ca646f-config-data\") pod \"horizon-65d6478577-sl5mq\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " pod="openstack/horizon-65d6478577-sl5mq" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.482830 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/348acb62-dfd9-46b8-9411-4706a2ca646f-horizon-secret-key\") pod \"horizon-65d6478577-sl5mq\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " pod="openstack/horizon-65d6478577-sl5mq" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.548632 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-fcwhd"] Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.572032 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-fcwhd"] Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.594188 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbf6b\" (UniqueName: \"kubernetes.io/projected/348acb62-dfd9-46b8-9411-4706a2ca646f-kube-api-access-wbf6b\") pod \"horizon-65d6478577-sl5mq\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " pod="openstack/horizon-65d6478577-sl5mq" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.760653 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbf4612a-8aa9-4f2a-ad97-a6262e51693a" path="/var/lib/kubelet/pods/cbf4612a-8aa9-4f2a-ad97-a6262e51693a/volumes" Mar 12 15:06:42 crc kubenswrapper[4832]: I0312 15:06:42.885825 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65d6478577-sl5mq" Mar 12 15:06:43 crc kubenswrapper[4832]: I0312 15:06:43.405624 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9bb63dfa-bae4-4b9e-a397-86733ad67149","Type":"ContainerStarted","Data":"98d81a02de450690c8aab36fb500e8a2585b82aa08be166f6c7214fa661a451c"} Mar 12 15:06:43 crc kubenswrapper[4832]: I0312 15:06:43.409825 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" event={"ID":"ce9a0e38-bff9-4748-85a4-19e165398bae","Type":"ContainerStarted","Data":"fe30b392659cf097e2aa65ea53615a417daf44c76232d6017c9d52956932626e"} Mar 12 15:06:43 crc kubenswrapper[4832]: I0312 15:06:43.419851 4832 scope.go:117] "RemoveContainer" containerID="d083ee601fb1cd07d42d4cf6482f91a8dd5368b3a691a496e07dc39027bd9ab6" Mar 12 15:06:43 crc kubenswrapper[4832]: I0312 15:06:43.426224 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" podStartSLOduration=4.426207335 podStartE2EDuration="4.426207335s" podCreationTimestamp="2026-03-12 15:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:06:43.425200126 +0000 UTC m=+1162.069214352" watchObservedRunningTime="2026-03-12 15:06:43.426207335 +0000 UTC m=+1162.070221561" Mar 12 15:06:43 crc kubenswrapper[4832]: I0312 15:06:43.491600 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65d6478577-sl5mq"] Mar 12 15:06:44 crc kubenswrapper[4832]: I0312 15:06:44.420065 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"da56c936-1396-41c7-a17b-bd5a3bb43c05","Type":"ContainerStarted","Data":"f039486d5968a082ee59bdbab60e30976379b46da04eb21bddf134ed51b78318"} Mar 12 15:06:44 crc kubenswrapper[4832]: I0312 15:06:44.426327 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9bb63dfa-bae4-4b9e-a397-86733ad67149","Type":"ContainerStarted","Data":"cfab7058f97e487db8a5bcae3e36fe33afd7b5ef1f39782c6516bd710694d596"} Mar 12 15:06:44 crc kubenswrapper[4832]: I0312 15:06:44.426415 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9bb63dfa-bae4-4b9e-a397-86733ad67149" containerName="glance-log" containerID="cri-o://98d81a02de450690c8aab36fb500e8a2585b82aa08be166f6c7214fa661a451c" gracePeriod=30 Mar 12 15:06:44 crc kubenswrapper[4832]: I0312 15:06:44.426572 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9bb63dfa-bae4-4b9e-a397-86733ad67149" containerName="glance-httpd" containerID="cri-o://cfab7058f97e487db8a5bcae3e36fe33afd7b5ef1f39782c6516bd710694d596" gracePeriod=30 Mar 12 15:06:44 crc kubenswrapper[4832]: I0312 15:06:44.428778 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65d6478577-sl5mq" event={"ID":"348acb62-dfd9-46b8-9411-4706a2ca646f","Type":"ContainerStarted","Data":"159655f2351d02b1a69abc7d44c1b1133adee6c772811345adb30d82012af24c"} Mar 12 15:06:44 crc kubenswrapper[4832]: I0312 15:06:44.428851 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:44 crc kubenswrapper[4832]: I0312 15:06:44.464216 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.464147665 podStartE2EDuration="4.464147665s" podCreationTimestamp="2026-03-12 15:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:06:44.450645443 +0000 UTC m=+1163.094659699" watchObservedRunningTime="2026-03-12 15:06:44.464147665 
+0000 UTC m=+1163.108161891" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.256563 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.444174 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bb63dfa-bae4-4b9e-a397-86733ad67149-logs\") pod \"9bb63dfa-bae4-4b9e-a397-86733ad67149\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.444496 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-combined-ca-bundle\") pod \"9bb63dfa-bae4-4b9e-a397-86733ad67149\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.444553 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"9bb63dfa-bae4-4b9e-a397-86733ad67149\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.444633 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-config-data\") pod \"9bb63dfa-bae4-4b9e-a397-86733ad67149\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.444667 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-scripts\") pod \"9bb63dfa-bae4-4b9e-a397-86733ad67149\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.444685 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bb63dfa-bae4-4b9e-a397-86733ad67149-httpd-run\") pod \"9bb63dfa-bae4-4b9e-a397-86733ad67149\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.444706 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xj9g\" (UniqueName: \"kubernetes.io/projected/9bb63dfa-bae4-4b9e-a397-86733ad67149-kube-api-access-4xj9g\") pod \"9bb63dfa-bae4-4b9e-a397-86733ad67149\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.444726 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-public-tls-certs\") pod \"9bb63dfa-bae4-4b9e-a397-86733ad67149\" (UID: \"9bb63dfa-bae4-4b9e-a397-86733ad67149\") " Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.449583 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bb63dfa-bae4-4b9e-a397-86733ad67149-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9bb63dfa-bae4-4b9e-a397-86733ad67149" (UID: "9bb63dfa-bae4-4b9e-a397-86733ad67149"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.452882 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bb63dfa-bae4-4b9e-a397-86733ad67149-logs" (OuterVolumeSpecName: "logs") pod "9bb63dfa-bae4-4b9e-a397-86733ad67149" (UID: "9bb63dfa-bae4-4b9e-a397-86733ad67149"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.459789 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "9bb63dfa-bae4-4b9e-a397-86733ad67149" (UID: "9bb63dfa-bae4-4b9e-a397-86733ad67149"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.462034 4832 generic.go:334] "Generic (PLEG): container finished" podID="9bb63dfa-bae4-4b9e-a397-86733ad67149" containerID="cfab7058f97e487db8a5bcae3e36fe33afd7b5ef1f39782c6516bd710694d596" exitCode=143 Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.462081 4832 generic.go:334] "Generic (PLEG): container finished" podID="9bb63dfa-bae4-4b9e-a397-86733ad67149" containerID="98d81a02de450690c8aab36fb500e8a2585b82aa08be166f6c7214fa661a451c" exitCode=143 Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.462152 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9bb63dfa-bae4-4b9e-a397-86733ad67149","Type":"ContainerDied","Data":"cfab7058f97e487db8a5bcae3e36fe33afd7b5ef1f39782c6516bd710694d596"} Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.462179 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9bb63dfa-bae4-4b9e-a397-86733ad67149","Type":"ContainerDied","Data":"98d81a02de450690c8aab36fb500e8a2585b82aa08be166f6c7214fa661a451c"} Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.462191 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9bb63dfa-bae4-4b9e-a397-86733ad67149","Type":"ContainerDied","Data":"9541674410e03113be7d2a77da66fca08875612bef1873580ee6a6a3fa06d969"} Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.462208 4832 
scope.go:117] "RemoveContainer" containerID="cfab7058f97e487db8a5bcae3e36fe33afd7b5ef1f39782c6516bd710694d596" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.462203 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-scripts" (OuterVolumeSpecName: "scripts") pod "9bb63dfa-bae4-4b9e-a397-86733ad67149" (UID: "9bb63dfa-bae4-4b9e-a397-86733ad67149"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.462339 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.477219 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb63dfa-bae4-4b9e-a397-86733ad67149-kube-api-access-4xj9g" (OuterVolumeSpecName: "kube-api-access-4xj9g") pod "9bb63dfa-bae4-4b9e-a397-86733ad67149" (UID: "9bb63dfa-bae4-4b9e-a397-86733ad67149"). InnerVolumeSpecName "kube-api-access-4xj9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.498028 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bb63dfa-bae4-4b9e-a397-86733ad67149" (UID: "9bb63dfa-bae4-4b9e-a397-86733ad67149"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.498383 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"da56c936-1396-41c7-a17b-bd5a3bb43c05","Type":"ContainerStarted","Data":"0ec5938165c3f96c222912ae475d24cfa26efcab8c801cfe1be4364abe748a08"} Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.498380 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="da56c936-1396-41c7-a17b-bd5a3bb43c05" containerName="glance-httpd" containerID="cri-o://0ec5938165c3f96c222912ae475d24cfa26efcab8c801cfe1be4364abe748a08" gracePeriod=30 Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.498092 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="da56c936-1396-41c7-a17b-bd5a3bb43c05" containerName="glance-log" containerID="cri-o://f039486d5968a082ee59bdbab60e30976379b46da04eb21bddf134ed51b78318" gracePeriod=30 Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.536271 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9bb63dfa-bae4-4b9e-a397-86733ad67149" (UID: "9bb63dfa-bae4-4b9e-a397-86733ad67149"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.541612 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-config-data" (OuterVolumeSpecName: "config-data") pod "9bb63dfa-bae4-4b9e-a397-86733ad67149" (UID: "9bb63dfa-bae4-4b9e-a397-86733ad67149"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.547198 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.547234 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.547244 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bb63dfa-bae4-4b9e-a397-86733ad67149-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.547252 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xj9g\" (UniqueName: \"kubernetes.io/projected/9bb63dfa-bae4-4b9e-a397-86733ad67149-kube-api-access-4xj9g\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.547263 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.547271 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bb63dfa-bae4-4b9e-a397-86733ad67149-logs\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.547279 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb63dfa-bae4-4b9e-a397-86733ad67149-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.547296 4832 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.576159 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.621085 4832 scope.go:117] "RemoveContainer" containerID="98d81a02de450690c8aab36fb500e8a2585b82aa08be166f6c7214fa661a451c" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.650599 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.714947 4832 scope.go:117] "RemoveContainer" containerID="cfab7058f97e487db8a5bcae3e36fe33afd7b5ef1f39782c6516bd710694d596" Mar 12 15:06:45 crc kubenswrapper[4832]: E0312 15:06:45.719198 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfab7058f97e487db8a5bcae3e36fe33afd7b5ef1f39782c6516bd710694d596\": container with ID starting with cfab7058f97e487db8a5bcae3e36fe33afd7b5ef1f39782c6516bd710694d596 not found: ID does not exist" containerID="cfab7058f97e487db8a5bcae3e36fe33afd7b5ef1f39782c6516bd710694d596" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.719243 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfab7058f97e487db8a5bcae3e36fe33afd7b5ef1f39782c6516bd710694d596"} err="failed to get container status \"cfab7058f97e487db8a5bcae3e36fe33afd7b5ef1f39782c6516bd710694d596\": rpc error: code = NotFound desc = could not find container \"cfab7058f97e487db8a5bcae3e36fe33afd7b5ef1f39782c6516bd710694d596\": container with ID starting with 
cfab7058f97e487db8a5bcae3e36fe33afd7b5ef1f39782c6516bd710694d596 not found: ID does not exist" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.719272 4832 scope.go:117] "RemoveContainer" containerID="98d81a02de450690c8aab36fb500e8a2585b82aa08be166f6c7214fa661a451c" Mar 12 15:06:45 crc kubenswrapper[4832]: E0312 15:06:45.722307 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98d81a02de450690c8aab36fb500e8a2585b82aa08be166f6c7214fa661a451c\": container with ID starting with 98d81a02de450690c8aab36fb500e8a2585b82aa08be166f6c7214fa661a451c not found: ID does not exist" containerID="98d81a02de450690c8aab36fb500e8a2585b82aa08be166f6c7214fa661a451c" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.722343 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98d81a02de450690c8aab36fb500e8a2585b82aa08be166f6c7214fa661a451c"} err="failed to get container status \"98d81a02de450690c8aab36fb500e8a2585b82aa08be166f6c7214fa661a451c\": rpc error: code = NotFound desc = could not find container \"98d81a02de450690c8aab36fb500e8a2585b82aa08be166f6c7214fa661a451c\": container with ID starting with 98d81a02de450690c8aab36fb500e8a2585b82aa08be166f6c7214fa661a451c not found: ID does not exist" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.722366 4832 scope.go:117] "RemoveContainer" containerID="cfab7058f97e487db8a5bcae3e36fe33afd7b5ef1f39782c6516bd710694d596" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.722913 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfab7058f97e487db8a5bcae3e36fe33afd7b5ef1f39782c6516bd710694d596"} err="failed to get container status \"cfab7058f97e487db8a5bcae3e36fe33afd7b5ef1f39782c6516bd710694d596\": rpc error: code = NotFound desc = could not find container \"cfab7058f97e487db8a5bcae3e36fe33afd7b5ef1f39782c6516bd710694d596\": container with ID 
starting with cfab7058f97e487db8a5bcae3e36fe33afd7b5ef1f39782c6516bd710694d596 not found: ID does not exist" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.722949 4832 scope.go:117] "RemoveContainer" containerID="98d81a02de450690c8aab36fb500e8a2585b82aa08be166f6c7214fa661a451c" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.723853 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98d81a02de450690c8aab36fb500e8a2585b82aa08be166f6c7214fa661a451c"} err="failed to get container status \"98d81a02de450690c8aab36fb500e8a2585b82aa08be166f6c7214fa661a451c\": rpc error: code = NotFound desc = could not find container \"98d81a02de450690c8aab36fb500e8a2585b82aa08be166f6c7214fa661a451c\": container with ID starting with 98d81a02de450690c8aab36fb500e8a2585b82aa08be166f6c7214fa661a451c not found: ID does not exist" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.801541 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.801491268 podStartE2EDuration="6.801491268s" podCreationTimestamp="2026-03-12 15:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:06:45.525827332 +0000 UTC m=+1164.169841558" watchObservedRunningTime="2026-03-12 15:06:45.801491268 +0000 UTC m=+1164.445505494" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.805591 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.811802 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.866919 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:06:45 crc kubenswrapper[4832]: E0312 
15:06:45.867264 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bb63dfa-bae4-4b9e-a397-86733ad67149" containerName="glance-httpd" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.867280 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb63dfa-bae4-4b9e-a397-86733ad67149" containerName="glance-httpd" Mar 12 15:06:45 crc kubenswrapper[4832]: E0312 15:06:45.867302 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bb63dfa-bae4-4b9e-a397-86733ad67149" containerName="glance-log" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.867308 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb63dfa-bae4-4b9e-a397-86733ad67149" containerName="glance-log" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.867461 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bb63dfa-bae4-4b9e-a397-86733ad67149" containerName="glance-httpd" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.867488 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bb63dfa-bae4-4b9e-a397-86733ad67149" containerName="glance-log" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.868355 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.875296 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.875534 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 12 15:06:45 crc kubenswrapper[4832]: I0312 15:06:45.907604 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.061955 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f161113d-cd3a-4bb1-be7b-70ab01642c94-logs\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.062010 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-scripts\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.062067 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.062152 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-config-data\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.062174 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f161113d-cd3a-4bb1-be7b-70ab01642c94-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.062192 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7hwl\" (UniqueName: \"kubernetes.io/projected/f161113d-cd3a-4bb1-be7b-70ab01642c94-kube-api-access-m7hwl\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.062215 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.062237 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.165023 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f161113d-cd3a-4bb1-be7b-70ab01642c94-logs\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.165315 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-scripts\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.165354 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.165430 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-config-data\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.165449 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f161113d-cd3a-4bb1-be7b-70ab01642c94-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.165472 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7hwl\" (UniqueName: 
\"kubernetes.io/projected/f161113d-cd3a-4bb1-be7b-70ab01642c94-kube-api-access-m7hwl\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.165492 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.165525 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.166083 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f161113d-cd3a-4bb1-be7b-70ab01642c94-logs\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.166258 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.166446 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/f161113d-cd3a-4bb1-be7b-70ab01642c94-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.169749 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.173281 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-scripts\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.173925 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.176512 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-config-data\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.180814 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7hwl\" (UniqueName: \"kubernetes.io/projected/f161113d-cd3a-4bb1-be7b-70ab01642c94-kube-api-access-m7hwl\") pod 
\"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.201449 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.493853 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.507682 4832 generic.go:334] "Generic (PLEG): container finished" podID="4e955107-9355-4511-b7f3-6171b221d884" containerID="b82a8c4b0b054a192a4cf5b804079aad3e000cffc5cd3cc5114fede40c2fa717" exitCode=0 Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.507734 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7pnpk" event={"ID":"4e955107-9355-4511-b7f3-6171b221d884","Type":"ContainerDied","Data":"b82a8c4b0b054a192a4cf5b804079aad3e000cffc5cd3cc5114fede40c2fa717"} Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.511100 4832 generic.go:334] "Generic (PLEG): container finished" podID="da56c936-1396-41c7-a17b-bd5a3bb43c05" containerID="0ec5938165c3f96c222912ae475d24cfa26efcab8c801cfe1be4364abe748a08" exitCode=0 Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.511121 4832 generic.go:334] "Generic (PLEG): container finished" podID="da56c936-1396-41c7-a17b-bd5a3bb43c05" containerID="f039486d5968a082ee59bdbab60e30976379b46da04eb21bddf134ed51b78318" exitCode=143 Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.511155 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"da56c936-1396-41c7-a17b-bd5a3bb43c05","Type":"ContainerDied","Data":"0ec5938165c3f96c222912ae475d24cfa26efcab8c801cfe1be4364abe748a08"} Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.511172 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"da56c936-1396-41c7-a17b-bd5a3bb43c05","Type":"ContainerDied","Data":"f039486d5968a082ee59bdbab60e30976379b46da04eb21bddf134ed51b78318"} Mar 12 15:06:46 crc kubenswrapper[4832]: I0312 15:06:46.633647 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bb63dfa-bae4-4b9e-a397-86733ad67149" path="/var/lib/kubelet/pods/9bb63dfa-bae4-4b9e-a397-86733ad67149/volumes" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.162776 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56ddd5f6f9-n8mxv"] Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.177396 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.181460 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-745cdbf99b-kdz5c"] Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.182965 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.190292 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.216673 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-745cdbf99b-kdz5c"] Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.288346 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65d6478577-sl5mq"] Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.329285 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b32181c-0268-4e3e-8b7b-f2811720ce58-combined-ca-bundle\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.329351 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b32181c-0268-4e3e-8b7b-f2811720ce58-config-data\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.329374 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7b32181c-0268-4e3e-8b7b-f2811720ce58-horizon-secret-key\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.329401 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh5b5\" (UniqueName: 
\"kubernetes.io/projected/7b32181c-0268-4e3e-8b7b-f2811720ce58-kube-api-access-nh5b5\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.329426 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b32181c-0268-4e3e-8b7b-f2811720ce58-logs\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.329540 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b32181c-0268-4e3e-8b7b-f2811720ce58-scripts\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.329559 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b32181c-0268-4e3e-8b7b-f2811720ce58-horizon-tls-certs\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.332559 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7c5974b5d4-dhhm8"] Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.334375 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.345085 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c5974b5d4-dhhm8"] Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.430625 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b32181c-0268-4e3e-8b7b-f2811720ce58-logs\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.430746 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b32181c-0268-4e3e-8b7b-f2811720ce58-scripts\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.430768 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b32181c-0268-4e3e-8b7b-f2811720ce58-horizon-tls-certs\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.430842 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b32181c-0268-4e3e-8b7b-f2811720ce58-combined-ca-bundle\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.431159 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b32181c-0268-4e3e-8b7b-f2811720ce58-logs\") pod \"horizon-745cdbf99b-kdz5c\" (UID: 
\"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.431290 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b32181c-0268-4e3e-8b7b-f2811720ce58-config-data\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.431307 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7b32181c-0268-4e3e-8b7b-f2811720ce58-horizon-secret-key\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.431336 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh5b5\" (UniqueName: \"kubernetes.io/projected/7b32181c-0268-4e3e-8b7b-f2811720ce58-kube-api-access-nh5b5\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.431552 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b32181c-0268-4e3e-8b7b-f2811720ce58-scripts\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.433618 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b32181c-0268-4e3e-8b7b-f2811720ce58-config-data\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc 
kubenswrapper[4832]: I0312 15:06:49.439668 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7b32181c-0268-4e3e-8b7b-f2811720ce58-horizon-secret-key\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.441190 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b32181c-0268-4e3e-8b7b-f2811720ce58-combined-ca-bundle\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.451038 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b32181c-0268-4e3e-8b7b-f2811720ce58-horizon-tls-certs\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.451661 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh5b5\" (UniqueName: \"kubernetes.io/projected/7b32181c-0268-4e3e-8b7b-f2811720ce58-kube-api-access-nh5b5\") pod \"horizon-745cdbf99b-kdz5c\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.528868 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.534373 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06633b31-01e2-4a1c-bf9e-e74b157fba1d-scripts\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.534428 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/06633b31-01e2-4a1c-bf9e-e74b157fba1d-horizon-secret-key\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.534445 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06633b31-01e2-4a1c-bf9e-e74b157fba1d-config-data\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.534643 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/06633b31-01e2-4a1c-bf9e-e74b157fba1d-horizon-tls-certs\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.534718 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06633b31-01e2-4a1c-bf9e-e74b157fba1d-logs\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " 
pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.534777 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06633b31-01e2-4a1c-bf9e-e74b157fba1d-combined-ca-bundle\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.535045 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnv6k\" (UniqueName: \"kubernetes.io/projected/06633b31-01e2-4a1c-bf9e-e74b157fba1d-kube-api-access-cnv6k\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.636081 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06633b31-01e2-4a1c-bf9e-e74b157fba1d-scripts\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.636148 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/06633b31-01e2-4a1c-bf9e-e74b157fba1d-horizon-secret-key\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.636164 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06633b31-01e2-4a1c-bf9e-e74b157fba1d-config-data\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 
15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.636226 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/06633b31-01e2-4a1c-bf9e-e74b157fba1d-horizon-tls-certs\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.636251 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06633b31-01e2-4a1c-bf9e-e74b157fba1d-logs\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.636277 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06633b31-01e2-4a1c-bf9e-e74b157fba1d-combined-ca-bundle\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.636314 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnv6k\" (UniqueName: \"kubernetes.io/projected/06633b31-01e2-4a1c-bf9e-e74b157fba1d-kube-api-access-cnv6k\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.637108 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06633b31-01e2-4a1c-bf9e-e74b157fba1d-scripts\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.638011 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06633b31-01e2-4a1c-bf9e-e74b157fba1d-logs\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.638715 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06633b31-01e2-4a1c-bf9e-e74b157fba1d-config-data\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.644350 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/06633b31-01e2-4a1c-bf9e-e74b157fba1d-horizon-secret-key\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.645020 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/06633b31-01e2-4a1c-bf9e-e74b157fba1d-horizon-tls-certs\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.668146 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnv6k\" (UniqueName: \"kubernetes.io/projected/06633b31-01e2-4a1c-bf9e-e74b157fba1d-kube-api-access-cnv6k\") pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.668745 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06633b31-01e2-4a1c-bf9e-e74b157fba1d-combined-ca-bundle\") 
pod \"horizon-7c5974b5d4-dhhm8\" (UID: \"06633b31-01e2-4a1c-bf9e-e74b157fba1d\") " pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:49 crc kubenswrapper[4832]: I0312 15:06:49.956434 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:06:50 crc kubenswrapper[4832]: I0312 15:06:50.473679 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:06:50 crc kubenswrapper[4832]: I0312 15:06:50.525165 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-bqv5j"] Mar 12 15:06:50 crc kubenswrapper[4832]: I0312 15:06:50.525390 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" podUID="9fabd59a-046c-4afa-b884-f5a83cc91a53" containerName="dnsmasq-dns" containerID="cri-o://8bbdd46e82c0779df5f7b1241f16735d46ff482acb3908d9c1f5bde24323ffa6" gracePeriod=10 Mar 12 15:06:51 crc kubenswrapper[4832]: I0312 15:06:51.583371 4832 generic.go:334] "Generic (PLEG): container finished" podID="9fabd59a-046c-4afa-b884-f5a83cc91a53" containerID="8bbdd46e82c0779df5f7b1241f16735d46ff482acb3908d9c1f5bde24323ffa6" exitCode=0 Mar 12 15:06:51 crc kubenswrapper[4832]: I0312 15:06:51.583448 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" event={"ID":"9fabd59a-046c-4afa-b884-f5a83cc91a53","Type":"ContainerDied","Data":"8bbdd46e82c0779df5f7b1241f16735d46ff482acb3908d9c1f5bde24323ffa6"} Mar 12 15:06:53 crc kubenswrapper[4832]: I0312 15:06:53.966625 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.127235 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-combined-ca-bundle\") pod \"4e955107-9355-4511-b7f3-6171b221d884\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.127288 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-scripts\") pod \"4e955107-9355-4511-b7f3-6171b221d884\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.127400 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-fernet-keys\") pod \"4e955107-9355-4511-b7f3-6171b221d884\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.127440 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlr7m\" (UniqueName: \"kubernetes.io/projected/4e955107-9355-4511-b7f3-6171b221d884-kube-api-access-jlr7m\") pod \"4e955107-9355-4511-b7f3-6171b221d884\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.127536 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-credential-keys\") pod \"4e955107-9355-4511-b7f3-6171b221d884\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.127613 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-config-data\") pod \"4e955107-9355-4511-b7f3-6171b221d884\" (UID: \"4e955107-9355-4511-b7f3-6171b221d884\") " Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.136673 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e955107-9355-4511-b7f3-6171b221d884-kube-api-access-jlr7m" (OuterVolumeSpecName: "kube-api-access-jlr7m") pod "4e955107-9355-4511-b7f3-6171b221d884" (UID: "4e955107-9355-4511-b7f3-6171b221d884"). InnerVolumeSpecName "kube-api-access-jlr7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.137948 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4e955107-9355-4511-b7f3-6171b221d884" (UID: "4e955107-9355-4511-b7f3-6171b221d884"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.138898 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-scripts" (OuterVolumeSpecName: "scripts") pod "4e955107-9355-4511-b7f3-6171b221d884" (UID: "4e955107-9355-4511-b7f3-6171b221d884"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.144690 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4e955107-9355-4511-b7f3-6171b221d884" (UID: "4e955107-9355-4511-b7f3-6171b221d884"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.162426 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e955107-9355-4511-b7f3-6171b221d884" (UID: "4e955107-9355-4511-b7f3-6171b221d884"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.174175 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-config-data" (OuterVolumeSpecName: "config-data") pod "4e955107-9355-4511-b7f3-6171b221d884" (UID: "4e955107-9355-4511-b7f3-6171b221d884"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.229797 4832 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.229831 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.229841 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.229849 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:54 crc 
kubenswrapper[4832]: I0312 15:06:54.229858 4832 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e955107-9355-4511-b7f3-6171b221d884-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.229866 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlr7m\" (UniqueName: \"kubernetes.io/projected/4e955107-9355-4511-b7f3-6171b221d884-kube-api-access-jlr7m\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.472484 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" podUID="9fabd59a-046c-4afa-b884-f5a83cc91a53" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: connect: connection refused" Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.617463 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7pnpk" event={"ID":"4e955107-9355-4511-b7f3-6171b221d884","Type":"ContainerDied","Data":"3d2e76ed71c732d624850e13ed167b96c0076ec27ff06ad6eeed15dde1df0ce7"} Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.617519 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d2e76ed71c732d624850e13ed167b96c0076ec27ff06ad6eeed15dde1df0ce7" Mar 12 15:06:54 crc kubenswrapper[4832]: I0312 15:06:54.617526 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7pnpk" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.072235 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7pnpk"] Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.079456 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7pnpk"] Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.170906 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6psc2"] Mar 12 15:06:55 crc kubenswrapper[4832]: E0312 15:06:55.171313 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e955107-9355-4511-b7f3-6171b221d884" containerName="keystone-bootstrap" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.171328 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e955107-9355-4511-b7f3-6171b221d884" containerName="keystone-bootstrap" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.175191 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e955107-9355-4511-b7f3-6171b221d884" containerName="keystone-bootstrap" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.176087 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.178626 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.179303 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7gszf" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.179303 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.180257 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.180429 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.186010 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6psc2"] Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.348623 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-combined-ca-bundle\") pod \"keystone-bootstrap-6psc2\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.348783 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q42g8\" (UniqueName: \"kubernetes.io/projected/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-kube-api-access-q42g8\") pod \"keystone-bootstrap-6psc2\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.348906 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-credential-keys\") pod \"keystone-bootstrap-6psc2\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.348935 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-config-data\") pod \"keystone-bootstrap-6psc2\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.348960 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-scripts\") pod \"keystone-bootstrap-6psc2\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.349059 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-fernet-keys\") pod \"keystone-bootstrap-6psc2\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.450728 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-combined-ca-bundle\") pod \"keystone-bootstrap-6psc2\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.450814 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-q42g8\" (UniqueName: \"kubernetes.io/projected/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-kube-api-access-q42g8\") pod \"keystone-bootstrap-6psc2\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.450884 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-credential-keys\") pod \"keystone-bootstrap-6psc2\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.450902 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-config-data\") pod \"keystone-bootstrap-6psc2\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.451590 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-scripts\") pod \"keystone-bootstrap-6psc2\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.451631 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-fernet-keys\") pod \"keystone-bootstrap-6psc2\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.457734 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-combined-ca-bundle\") pod 
\"keystone-bootstrap-6psc2\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.458057 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-credential-keys\") pod \"keystone-bootstrap-6psc2\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.461644 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-config-data\") pod \"keystone-bootstrap-6psc2\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.462926 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-fernet-keys\") pod \"keystone-bootstrap-6psc2\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.471967 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-scripts\") pod \"keystone-bootstrap-6psc2\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:55 crc kubenswrapper[4832]: I0312 15:06:55.473223 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q42g8\" (UniqueName: \"kubernetes.io/projected/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-kube-api-access-q42g8\") pod \"keystone-bootstrap-6psc2\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:55 crc 
kubenswrapper[4832]: I0312 15:06:55.533248 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:06:56 crc kubenswrapper[4832]: I0312 15:06:56.314244 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:06:56 crc kubenswrapper[4832]: I0312 15:06:56.314297 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:06:56 crc kubenswrapper[4832]: I0312 15:06:56.633074 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e955107-9355-4511-b7f3-6171b221d884" path="/var/lib/kubelet/pods/4e955107-9355-4511-b7f3-6171b221d884/volumes" Mar 12 15:06:58 crc kubenswrapper[4832]: E0312 15:06:58.640250 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Mar 12 15:06:58 crc kubenswrapper[4832]: E0312 15:06:58.640938 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf4h5c8h8ch54fh544h66fh9bhdh5ffh6h589h5h57fhb5h558h54dhdch579h54bhf5h5ffh5f6h57ch5fdh5bdh4h56bh64bh686h56h58dhb8q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbf6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-65d6478577-sl5mq_openstack(348acb62-dfd9-46b8-9411-4706a2ca646f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 15:06:58 crc kubenswrapper[4832]: E0312 
15:06:58.644650 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-65d6478577-sl5mq" podUID="348acb62-dfd9-46b8-9411-4706a2ca646f" Mar 12 15:06:58 crc kubenswrapper[4832]: E0312 15:06:58.651984 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Mar 12 15:06:58 crc kubenswrapper[4832]: E0312 15:06:58.652368 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n554h697h55fh87h59fh5fch4h587h58bh64bh5d5h87h695h6fh54dh59bh69h5c5h5bch5dch666h5fdh58ch5b4h684hcbhddh579h66bh5c8h5d6h74q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bxf86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-56ddd5f6f9-n8mxv_openstack(96736cea-2309-4ec2-b1b5-edb4b4aafb2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 15:06:58 crc kubenswrapper[4832]: E0312 
15:06:58.655382 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-56ddd5f6f9-n8mxv" podUID="96736cea-2309-4ec2-b1b5-edb4b4aafb2b" Mar 12 15:06:59 crc kubenswrapper[4832]: I0312 15:06:59.471807 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" podUID="9fabd59a-046c-4afa-b884-f5a83cc91a53" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: connect: connection refused" Mar 12 15:07:00 crc kubenswrapper[4832]: E0312 15:07:00.280656 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Mar 12 15:07:00 crc kubenswrapper[4832]: E0312 15:07:00.280970 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjmb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-pz5qf_openstack(bdded1bd-9b32-465d-9226-618cf5d0e8bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 15:07:00 crc kubenswrapper[4832]: E0312 15:07:00.283603 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-pz5qf" podUID="bdded1bd-9b32-465d-9226-618cf5d0e8bb" Mar 12 15:07:00 crc kubenswrapper[4832]: E0312 15:07:00.364707 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Mar 12 15:07:00 crc kubenswrapper[4832]: E0312 15:07:00.364909 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h5f5h569h66fh578h54fh5b9h79h5fdh697hd7hb6h577h56fhf4h645h646h5cbh68fh8ch5bh658h5b9hdchb8hcbh5bfhd8h646hbch68bh698q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m4mkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-549d6b9b97-cjxdq_openstack(19af684a-ef80-4214-89c5-4c184c4ca0d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 15:07:00 crc kubenswrapper[4832]: E0312 
15:07:00.370749 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-549d6b9b97-cjxdq" podUID="19af684a-ef80-4214-89c5-4c184c4ca0d6" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.398424 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.572105 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-config-data\") pod \"da56c936-1396-41c7-a17b-bd5a3bb43c05\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.572204 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-scripts\") pod \"da56c936-1396-41c7-a17b-bd5a3bb43c05\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.572421 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-combined-ca-bundle\") pod \"da56c936-1396-41c7-a17b-bd5a3bb43c05\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.572444 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"da56c936-1396-41c7-a17b-bd5a3bb43c05\" (UID: 
\"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.572623 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq8xl\" (UniqueName: \"kubernetes.io/projected/da56c936-1396-41c7-a17b-bd5a3bb43c05-kube-api-access-gq8xl\") pod \"da56c936-1396-41c7-a17b-bd5a3bb43c05\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.572697 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-internal-tls-certs\") pod \"da56c936-1396-41c7-a17b-bd5a3bb43c05\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.572737 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da56c936-1396-41c7-a17b-bd5a3bb43c05-logs\") pod \"da56c936-1396-41c7-a17b-bd5a3bb43c05\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.572774 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/da56c936-1396-41c7-a17b-bd5a3bb43c05-httpd-run\") pod \"da56c936-1396-41c7-a17b-bd5a3bb43c05\" (UID: \"da56c936-1396-41c7-a17b-bd5a3bb43c05\") " Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.573611 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da56c936-1396-41c7-a17b-bd5a3bb43c05-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "da56c936-1396-41c7-a17b-bd5a3bb43c05" (UID: "da56c936-1396-41c7-a17b-bd5a3bb43c05"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.578926 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da56c936-1396-41c7-a17b-bd5a3bb43c05-logs" (OuterVolumeSpecName: "logs") pod "da56c936-1396-41c7-a17b-bd5a3bb43c05" (UID: "da56c936-1396-41c7-a17b-bd5a3bb43c05"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.579042 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-scripts" (OuterVolumeSpecName: "scripts") pod "da56c936-1396-41c7-a17b-bd5a3bb43c05" (UID: "da56c936-1396-41c7-a17b-bd5a3bb43c05"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.580628 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da56c936-1396-41c7-a17b-bd5a3bb43c05-kube-api-access-gq8xl" (OuterVolumeSpecName: "kube-api-access-gq8xl") pod "da56c936-1396-41c7-a17b-bd5a3bb43c05" (UID: "da56c936-1396-41c7-a17b-bd5a3bb43c05"). InnerVolumeSpecName "kube-api-access-gq8xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.588722 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "da56c936-1396-41c7-a17b-bd5a3bb43c05" (UID: "da56c936-1396-41c7-a17b-bd5a3bb43c05"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.599633 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da56c936-1396-41c7-a17b-bd5a3bb43c05" (UID: "da56c936-1396-41c7-a17b-bd5a3bb43c05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.629434 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "da56c936-1396-41c7-a17b-bd5a3bb43c05" (UID: "da56c936-1396-41c7-a17b-bd5a3bb43c05"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.635747 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-config-data" (OuterVolumeSpecName: "config-data") pod "da56c936-1396-41c7-a17b-bd5a3bb43c05" (UID: "da56c936-1396-41c7-a17b-bd5a3bb43c05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.671966 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.675664 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq8xl\" (UniqueName: \"kubernetes.io/projected/da56c936-1396-41c7-a17b-bd5a3bb43c05-kube-api-access-gq8xl\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.675691 4832 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.675703 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da56c936-1396-41c7-a17b-bd5a3bb43c05-logs\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.675714 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/da56c936-1396-41c7-a17b-bd5a3bb43c05-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.675724 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.675733 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.675742 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da56c936-1396-41c7-a17b-bd5a3bb43c05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.675763 4832 
reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.676923 4832 generic.go:334] "Generic (PLEG): container finished" podID="40b2fcd2-d826-44f8-a9e1-125b17905fae" containerID="27d3c0f53caf72eaced680d7ed078fe786c01365274749f744a9ae440320d658" exitCode=0 Mar 12 15:07:00 crc kubenswrapper[4832]: E0312 15:07:00.678759 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-pz5qf" podUID="bdded1bd-9b32-465d-9226-618cf5d0e8bb" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.706978 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"da56c936-1396-41c7-a17b-bd5a3bb43c05","Type":"ContainerDied","Data":"1b4fc6f136659b276f4946b6ae231947e423acfa77f5140e52fc7544ea9027f9"} Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.707026 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sf9s6" event={"ID":"40b2fcd2-d826-44f8-a9e1-125b17905fae","Type":"ContainerDied","Data":"27d3c0f53caf72eaced680d7ed078fe786c01365274749f744a9ae440320d658"} Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.707139 4832 scope.go:117] "RemoveContainer" containerID="0ec5938165c3f96c222912ae475d24cfa26efcab8c801cfe1be4364abe748a08" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.707625 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.777456 4832 reconciler_common.go:293] "Volume detached for volume 
\"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.790365 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.807550 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.817021 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 15:07:00 crc kubenswrapper[4832]: E0312 15:07:00.817468 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da56c936-1396-41c7-a17b-bd5a3bb43c05" containerName="glance-log" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.817492 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="da56c936-1396-41c7-a17b-bd5a3bb43c05" containerName="glance-log" Mar 12 15:07:00 crc kubenswrapper[4832]: E0312 15:07:00.817540 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da56c936-1396-41c7-a17b-bd5a3bb43c05" containerName="glance-httpd" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.817551 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="da56c936-1396-41c7-a17b-bd5a3bb43c05" containerName="glance-httpd" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.817756 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="da56c936-1396-41c7-a17b-bd5a3bb43c05" containerName="glance-log" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.817787 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="da56c936-1396-41c7-a17b-bd5a3bb43c05" containerName="glance-httpd" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.818860 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.823830 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.824267 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.831420 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.980732 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81ebdaa9-c995-4936-9476-d996aa2532a9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.980911 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l42s\" (UniqueName: \"kubernetes.io/projected/81ebdaa9-c995-4936-9476-d996aa2532a9-kube-api-access-4l42s\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.980975 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.981121 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.981163 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.981221 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.981243 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:00 crc kubenswrapper[4832]: I0312 15:07:00.981326 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ebdaa9-c995-4936-9476-d996aa2532a9-logs\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:01 crc kubenswrapper[4832]: I0312 15:07:01.082529 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:01 crc kubenswrapper[4832]: I0312 15:07:01.082592 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:01 crc kubenswrapper[4832]: I0312 15:07:01.082620 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:01 crc kubenswrapper[4832]: I0312 15:07:01.082654 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ebdaa9-c995-4936-9476-d996aa2532a9-logs\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:01 crc kubenswrapper[4832]: I0312 15:07:01.082749 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81ebdaa9-c995-4936-9476-d996aa2532a9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:01 crc kubenswrapper[4832]: I0312 15:07:01.082817 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l42s\" (UniqueName: 
\"kubernetes.io/projected/81ebdaa9-c995-4936-9476-d996aa2532a9-kube-api-access-4l42s\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:01 crc kubenswrapper[4832]: I0312 15:07:01.082848 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:01 crc kubenswrapper[4832]: I0312 15:07:01.082898 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:01 crc kubenswrapper[4832]: I0312 15:07:01.083146 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Mar 12 15:07:01 crc kubenswrapper[4832]: I0312 15:07:01.083326 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ebdaa9-c995-4936-9476-d996aa2532a9-logs\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:01 crc kubenswrapper[4832]: I0312 15:07:01.083934 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/81ebdaa9-c995-4936-9476-d996aa2532a9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:01 crc kubenswrapper[4832]: I0312 15:07:01.087077 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:01 crc kubenswrapper[4832]: I0312 15:07:01.087439 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:01 crc kubenswrapper[4832]: I0312 15:07:01.087463 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:01 crc kubenswrapper[4832]: I0312 15:07:01.096155 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:01 crc kubenswrapper[4832]: I0312 15:07:01.100394 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l42s\" (UniqueName: \"kubernetes.io/projected/81ebdaa9-c995-4936-9476-d996aa2532a9-kube-api-access-4l42s\") pod 
\"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:01 crc kubenswrapper[4832]: I0312 15:07:01.125415 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:07:01 crc kubenswrapper[4832]: I0312 15:07:01.138949 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 15:07:02 crc kubenswrapper[4832]: I0312 15:07:02.629403 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da56c936-1396-41c7-a17b-bd5a3bb43c05" path="/var/lib/kubelet/pods/da56c936-1396-41c7-a17b-bd5a3bb43c05/volumes" Mar 12 15:07:08 crc kubenswrapper[4832]: E0312 15:07:08.859342 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Mar 12 15:07:08 crc kubenswrapper[4832]: E0312 15:07:08.860081 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wgrgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-brwnc_openstack(6f2adafe-55f5-4149-893d-bdf63ec5ef7d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 15:07:08 crc kubenswrapper[4832]: E0312 15:07:08.861276 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-brwnc" 
podUID="6f2adafe-55f5-4149-893d-bdf63ec5ef7d" Mar 12 15:07:08 crc kubenswrapper[4832]: I0312 15:07:08.995369 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65d6478577-sl5mq" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.052227 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56ddd5f6f9-n8mxv" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.062317 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.073149 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-549d6b9b97-cjxdq" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.075763 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sf9s6" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.151000 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/348acb62-dfd9-46b8-9411-4706a2ca646f-logs\") pod \"348acb62-dfd9-46b8-9411-4706a2ca646f\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.151121 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b2fcd2-d826-44f8-a9e1-125b17905fae-combined-ca-bundle\") pod \"40b2fcd2-d826-44f8-a9e1-125b17905fae\" (UID: \"40b2fcd2-d826-44f8-a9e1-125b17905fae\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.151145 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-horizon-secret-key\") pod \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\" (UID: 
\"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.151171 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-ovsdbserver-nb\") pod \"9fabd59a-046c-4afa-b884-f5a83cc91a53\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.151380 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/348acb62-dfd9-46b8-9411-4706a2ca646f-logs" (OuterVolumeSpecName: "logs") pod "348acb62-dfd9-46b8-9411-4706a2ca646f" (UID: "348acb62-dfd9-46b8-9411-4706a2ca646f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.152021 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19af684a-ef80-4214-89c5-4c184c4ca0d6-config-data\") pod \"19af684a-ef80-4214-89c5-4c184c4ca0d6\" (UID: \"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.152076 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x7w6\" (UniqueName: \"kubernetes.io/projected/9fabd59a-046c-4afa-b884-f5a83cc91a53-kube-api-access-4x7w6\") pod \"9fabd59a-046c-4afa-b884-f5a83cc91a53\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.152097 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-dns-svc\") pod \"9fabd59a-046c-4afa-b884-f5a83cc91a53\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.152121 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19af684a-ef80-4214-89c5-4c184c4ca0d6-logs\") pod \"19af684a-ef80-4214-89c5-4c184c4ca0d6\" (UID: \"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.152136 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-config\") pod \"9fabd59a-046c-4afa-b884-f5a83cc91a53\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.152155 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19af684a-ef80-4214-89c5-4c184c4ca0d6-horizon-secret-key\") pod \"19af684a-ef80-4214-89c5-4c184c4ca0d6\" (UID: \"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.152171 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/348acb62-dfd9-46b8-9411-4706a2ca646f-config-data\") pod \"348acb62-dfd9-46b8-9411-4706a2ca646f\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.152193 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/348acb62-dfd9-46b8-9411-4706a2ca646f-horizon-secret-key\") pod \"348acb62-dfd9-46b8-9411-4706a2ca646f\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.152245 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-config-data\") pod \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\" (UID: \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " Mar 12 15:07:09 crc kubenswrapper[4832]: 
I0312 15:07:09.152262 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/40b2fcd2-d826-44f8-a9e1-125b17905fae-config\") pod \"40b2fcd2-d826-44f8-a9e1-125b17905fae\" (UID: \"40b2fcd2-d826-44f8-a9e1-125b17905fae\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.152275 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj4lg\" (UniqueName: \"kubernetes.io/projected/40b2fcd2-d826-44f8-a9e1-125b17905fae-kube-api-access-qj4lg\") pod \"40b2fcd2-d826-44f8-a9e1-125b17905fae\" (UID: \"40b2fcd2-d826-44f8-a9e1-125b17905fae\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.152296 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/348acb62-dfd9-46b8-9411-4706a2ca646f-scripts\") pod \"348acb62-dfd9-46b8-9411-4706a2ca646f\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.152328 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxf86\" (UniqueName: \"kubernetes.io/projected/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-kube-api-access-bxf86\") pod \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\" (UID: \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.152353 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19af684a-ef80-4214-89c5-4c184c4ca0d6-scripts\") pod \"19af684a-ef80-4214-89c5-4c184c4ca0d6\" (UID: \"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.152378 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbf6b\" (UniqueName: \"kubernetes.io/projected/348acb62-dfd9-46b8-9411-4706a2ca646f-kube-api-access-wbf6b\") pod 
\"348acb62-dfd9-46b8-9411-4706a2ca646f\" (UID: \"348acb62-dfd9-46b8-9411-4706a2ca646f\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.152395 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-ovsdbserver-sb\") pod \"9fabd59a-046c-4afa-b884-f5a83cc91a53\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.152422 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-scripts\") pod \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\" (UID: \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.152445 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4mkl\" (UniqueName: \"kubernetes.io/projected/19af684a-ef80-4214-89c5-4c184c4ca0d6-kube-api-access-m4mkl\") pod \"19af684a-ef80-4214-89c5-4c184c4ca0d6\" (UID: \"19af684a-ef80-4214-89c5-4c184c4ca0d6\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.152462 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-dns-swift-storage-0\") pod \"9fabd59a-046c-4afa-b884-f5a83cc91a53\" (UID: \"9fabd59a-046c-4afa-b884-f5a83cc91a53\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.152484 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-logs\") pod \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\" (UID: \"96736cea-2309-4ec2-b1b5-edb4b4aafb2b\") " Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.153087 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/348acb62-dfd9-46b8-9411-4706a2ca646f-config-data" (OuterVolumeSpecName: "config-data") pod "348acb62-dfd9-46b8-9411-4706a2ca646f" (UID: "348acb62-dfd9-46b8-9411-4706a2ca646f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.153373 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-logs" (OuterVolumeSpecName: "logs") pod "96736cea-2309-4ec2-b1b5-edb4b4aafb2b" (UID: "96736cea-2309-4ec2-b1b5-edb4b4aafb2b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.153777 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/348acb62-dfd9-46b8-9411-4706a2ca646f-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.153796 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-logs\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.153805 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/348acb62-dfd9-46b8-9411-4706a2ca646f-logs\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.153991 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19af684a-ef80-4214-89c5-4c184c4ca0d6-logs" (OuterVolumeSpecName: "logs") pod "19af684a-ef80-4214-89c5-4c184c4ca0d6" (UID: "19af684a-ef80-4214-89c5-4c184c4ca0d6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.154790 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19af684a-ef80-4214-89c5-4c184c4ca0d6-config-data" (OuterVolumeSpecName: "config-data") pod "19af684a-ef80-4214-89c5-4c184c4ca0d6" (UID: "19af684a-ef80-4214-89c5-4c184c4ca0d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.156157 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/348acb62-dfd9-46b8-9411-4706a2ca646f-scripts" (OuterVolumeSpecName: "scripts") pod "348acb62-dfd9-46b8-9411-4706a2ca646f" (UID: "348acb62-dfd9-46b8-9411-4706a2ca646f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.156197 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19af684a-ef80-4214-89c5-4c184c4ca0d6-scripts" (OuterVolumeSpecName: "scripts") pod "19af684a-ef80-4214-89c5-4c184c4ca0d6" (UID: "19af684a-ef80-4214-89c5-4c184c4ca0d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.156344 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-scripts" (OuterVolumeSpecName: "scripts") pod "96736cea-2309-4ec2-b1b5-edb4b4aafb2b" (UID: "96736cea-2309-4ec2-b1b5-edb4b4aafb2b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.156837 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "96736cea-2309-4ec2-b1b5-edb4b4aafb2b" (UID: "96736cea-2309-4ec2-b1b5-edb4b4aafb2b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.159322 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/348acb62-dfd9-46b8-9411-4706a2ca646f-kube-api-access-wbf6b" (OuterVolumeSpecName: "kube-api-access-wbf6b") pod "348acb62-dfd9-46b8-9411-4706a2ca646f" (UID: "348acb62-dfd9-46b8-9411-4706a2ca646f"). InnerVolumeSpecName "kube-api-access-wbf6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.159645 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40b2fcd2-d826-44f8-a9e1-125b17905fae-kube-api-access-qj4lg" (OuterVolumeSpecName: "kube-api-access-qj4lg") pod "40b2fcd2-d826-44f8-a9e1-125b17905fae" (UID: "40b2fcd2-d826-44f8-a9e1-125b17905fae"). InnerVolumeSpecName "kube-api-access-qj4lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.160109 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-config-data" (OuterVolumeSpecName: "config-data") pod "96736cea-2309-4ec2-b1b5-edb4b4aafb2b" (UID: "96736cea-2309-4ec2-b1b5-edb4b4aafb2b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.175380 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/348acb62-dfd9-46b8-9411-4706a2ca646f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "348acb62-dfd9-46b8-9411-4706a2ca646f" (UID: "348acb62-dfd9-46b8-9411-4706a2ca646f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.175860 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19af684a-ef80-4214-89c5-4c184c4ca0d6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "19af684a-ef80-4214-89c5-4c184c4ca0d6" (UID: "19af684a-ef80-4214-89c5-4c184c4ca0d6"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.176294 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fabd59a-046c-4afa-b884-f5a83cc91a53-kube-api-access-4x7w6" (OuterVolumeSpecName: "kube-api-access-4x7w6") pod "9fabd59a-046c-4afa-b884-f5a83cc91a53" (UID: "9fabd59a-046c-4afa-b884-f5a83cc91a53"). InnerVolumeSpecName "kube-api-access-4x7w6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.176353 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-kube-api-access-bxf86" (OuterVolumeSpecName: "kube-api-access-bxf86") pod "96736cea-2309-4ec2-b1b5-edb4b4aafb2b" (UID: "96736cea-2309-4ec2-b1b5-edb4b4aafb2b"). InnerVolumeSpecName "kube-api-access-bxf86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.186345 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b2fcd2-d826-44f8-a9e1-125b17905fae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40b2fcd2-d826-44f8-a9e1-125b17905fae" (UID: "40b2fcd2-d826-44f8-a9e1-125b17905fae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.191365 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19af684a-ef80-4214-89c5-4c184c4ca0d6-kube-api-access-m4mkl" (OuterVolumeSpecName: "kube-api-access-m4mkl") pod "19af684a-ef80-4214-89c5-4c184c4ca0d6" (UID: "19af684a-ef80-4214-89c5-4c184c4ca0d6"). InnerVolumeSpecName "kube-api-access-m4mkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.214030 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9fabd59a-046c-4afa-b884-f5a83cc91a53" (UID: "9fabd59a-046c-4afa-b884-f5a83cc91a53"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.232093 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9fabd59a-046c-4afa-b884-f5a83cc91a53" (UID: "9fabd59a-046c-4afa-b884-f5a83cc91a53"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.233087 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9fabd59a-046c-4afa-b884-f5a83cc91a53" (UID: "9fabd59a-046c-4afa-b884-f5a83cc91a53"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.238721 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b2fcd2-d826-44f8-a9e1-125b17905fae-config" (OuterVolumeSpecName: "config") pod "40b2fcd2-d826-44f8-a9e1-125b17905fae" (UID: "40b2fcd2-d826-44f8-a9e1-125b17905fae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.246868 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-config" (OuterVolumeSpecName: "config") pod "9fabd59a-046c-4afa-b884-f5a83cc91a53" (UID: "9fabd59a-046c-4afa-b884-f5a83cc91a53"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.254948 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9fabd59a-046c-4afa-b884-f5a83cc91a53" (UID: "9fabd59a-046c-4afa-b884-f5a83cc91a53"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.255899 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.255930 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4mkl\" (UniqueName: \"kubernetes.io/projected/19af684a-ef80-4214-89c5-4c184c4ca0d6-kube-api-access-m4mkl\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.255942 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.255955 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b2fcd2-d826-44f8-a9e1-125b17905fae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.255967 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.255980 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.255991 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19af684a-ef80-4214-89c5-4c184c4ca0d6-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 
15:07:09.256001 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x7w6\" (UniqueName: \"kubernetes.io/projected/9fabd59a-046c-4afa-b884-f5a83cc91a53-kube-api-access-4x7w6\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.256011 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.256020 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19af684a-ef80-4214-89c5-4c184c4ca0d6-logs\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.256030 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.256039 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19af684a-ef80-4214-89c5-4c184c4ca0d6-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.256050 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/348acb62-dfd9-46b8-9411-4706a2ca646f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.256060 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.256069 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/40b2fcd2-d826-44f8-a9e1-125b17905fae-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.256079 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj4lg\" (UniqueName: \"kubernetes.io/projected/40b2fcd2-d826-44f8-a9e1-125b17905fae-kube-api-access-qj4lg\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.256089 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/348acb62-dfd9-46b8-9411-4706a2ca646f-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.256099 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxf86\" (UniqueName: \"kubernetes.io/projected/96736cea-2309-4ec2-b1b5-edb4b4aafb2b-kube-api-access-bxf86\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.256109 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19af684a-ef80-4214-89c5-4c184c4ca0d6-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.256119 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbf6b\" (UniqueName: \"kubernetes.io/projected/348acb62-dfd9-46b8-9411-4706a2ca646f-kube-api-access-wbf6b\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.256132 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fabd59a-046c-4afa-b884-f5a83cc91a53-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.462204 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.472751 4832 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" podUID="9fabd59a-046c-4afa-b884-f5a83cc91a53" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: i/o timeout" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.472852 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.757386 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56ddd5f6f9-n8mxv" event={"ID":"96736cea-2309-4ec2-b1b5-edb4b4aafb2b","Type":"ContainerDied","Data":"4d3431a32494331a4c392a90d4f7d122b1676b2f742d2411e0e3669e9a85f5b2"} Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.757489 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56ddd5f6f9-n8mxv" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.759295 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65d6478577-sl5mq" event={"ID":"348acb62-dfd9-46b8-9411-4706a2ca646f","Type":"ContainerDied","Data":"159655f2351d02b1a69abc7d44c1b1133adee6c772811345adb30d82012af24c"} Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.759354 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65d6478577-sl5mq" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.761964 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-549d6b9b97-cjxdq" event={"ID":"19af684a-ef80-4214-89c5-4c184c4ca0d6","Type":"ContainerDied","Data":"60de4d264e831749f5814cb9c2787b88a0191e0a5aa966e17d094396a073c272"} Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.762024 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-549d6b9b97-cjxdq" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.764828 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" event={"ID":"9fabd59a-046c-4afa-b884-f5a83cc91a53","Type":"ContainerDied","Data":"1fd47d324368c6398b3c2f572e00180eb8278b4779945486d96d83ea2825de1a"} Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.764885 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-bqv5j" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.766271 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sf9s6" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.767600 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sf9s6" event={"ID":"40b2fcd2-d826-44f8-a9e1-125b17905fae","Type":"ContainerDied","Data":"ec6a9ee63cef22e7977b58db77bee7d24819a23e435f5d97254ec43564c1c3bc"} Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.767635 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec6a9ee63cef22e7977b58db77bee7d24819a23e435f5d97254ec43564c1c3bc" Mar 12 15:07:09 crc kubenswrapper[4832]: E0312 15:07:09.768474 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-brwnc" podUID="6f2adafe-55f5-4149-893d-bdf63ec5ef7d" Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.884049 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65d6478577-sl5mq"] Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.891299 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-65d6478577-sl5mq"] Mar 12 
15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.943680 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-549d6b9b97-cjxdq"] Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.952468 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-549d6b9b97-cjxdq"] Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.963559 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-bqv5j"] Mar 12 15:07:09 crc kubenswrapper[4832]: I0312 15:07:09.970376 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-bqv5j"] Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.008940 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56ddd5f6f9-n8mxv"] Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.015903 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-56ddd5f6f9-n8mxv"] Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.294388 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-nbwkf"] Mar 12 15:07:10 crc kubenswrapper[4832]: E0312 15:07:10.294905 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b2fcd2-d826-44f8-a9e1-125b17905fae" containerName="neutron-db-sync" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.294948 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b2fcd2-d826-44f8-a9e1-125b17905fae" containerName="neutron-db-sync" Mar 12 15:07:10 crc kubenswrapper[4832]: E0312 15:07:10.294970 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fabd59a-046c-4afa-b884-f5a83cc91a53" containerName="dnsmasq-dns" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.294981 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fabd59a-046c-4afa-b884-f5a83cc91a53" containerName="dnsmasq-dns" Mar 12 15:07:10 crc kubenswrapper[4832]: E0312 15:07:10.295030 4832 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9fabd59a-046c-4afa-b884-f5a83cc91a53" containerName="init" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.295044 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fabd59a-046c-4afa-b884-f5a83cc91a53" containerName="init" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.295335 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="40b2fcd2-d826-44f8-a9e1-125b17905fae" containerName="neutron-db-sync" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.295366 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fabd59a-046c-4afa-b884-f5a83cc91a53" containerName="dnsmasq-dns" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.296697 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.314214 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-nbwkf"] Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.379730 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8ccdd85bd-b4bf5"] Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.381191 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.383432 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-nbwkf\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.383542 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngkft\" (UniqueName: \"kubernetes.io/projected/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-kube-api-access-ngkft\") pod \"dnsmasq-dns-6b7b667979-nbwkf\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.383585 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-config\") pod \"dnsmasq-dns-6b7b667979-nbwkf\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.383628 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-nbwkf\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.383665 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-dns-svc\") pod 
\"dnsmasq-dns-6b7b667979-nbwkf\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.383737 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-nbwkf\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.383987 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vrq42" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.384118 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.385568 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.385707 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.389435 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8ccdd85bd-b4bf5"] Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.484870 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-dns-svc\") pod \"dnsmasq-dns-6b7b667979-nbwkf\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.484921 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-httpd-config\") pod \"neutron-8ccdd85bd-b4bf5\" (UID: \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.484942 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxp8w\" (UniqueName: \"kubernetes.io/projected/97e7d696-8bfc-49ec-ae8c-6061b9813d16-kube-api-access-wxp8w\") pod \"neutron-8ccdd85bd-b4bf5\" (UID: \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.484981 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-ovndb-tls-certs\") pod \"neutron-8ccdd85bd-b4bf5\" (UID: \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.485014 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-nbwkf\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.485055 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-nbwkf\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.485075 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-config\") pod \"neutron-8ccdd85bd-b4bf5\" (UID: \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.485108 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-combined-ca-bundle\") pod \"neutron-8ccdd85bd-b4bf5\" (UID: \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.485134 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngkft\" (UniqueName: \"kubernetes.io/projected/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-kube-api-access-ngkft\") pod \"dnsmasq-dns-6b7b667979-nbwkf\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.485160 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-config\") pod \"dnsmasq-dns-6b7b667979-nbwkf\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.485199 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-nbwkf\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.485710 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-dns-svc\") pod 
\"dnsmasq-dns-6b7b667979-nbwkf\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.485868 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-nbwkf\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.486547 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-nbwkf\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.486563 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-nbwkf\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.486696 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-config\") pod \"dnsmasq-dns-6b7b667979-nbwkf\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.524318 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngkft\" (UniqueName: \"kubernetes.io/projected/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-kube-api-access-ngkft\") pod \"dnsmasq-dns-6b7b667979-nbwkf\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " 
pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.586523 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxp8w\" (UniqueName: \"kubernetes.io/projected/97e7d696-8bfc-49ec-ae8c-6061b9813d16-kube-api-access-wxp8w\") pod \"neutron-8ccdd85bd-b4bf5\" (UID: \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.586574 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-httpd-config\") pod \"neutron-8ccdd85bd-b4bf5\" (UID: \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.586621 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-ovndb-tls-certs\") pod \"neutron-8ccdd85bd-b4bf5\" (UID: \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.586706 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-config\") pod \"neutron-8ccdd85bd-b4bf5\" (UID: \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.586753 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-combined-ca-bundle\") pod \"neutron-8ccdd85bd-b4bf5\" (UID: \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.591219 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-httpd-config\") pod \"neutron-8ccdd85bd-b4bf5\" (UID: \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.591310 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-ovndb-tls-certs\") pod \"neutron-8ccdd85bd-b4bf5\" (UID: \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.591769 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-config\") pod \"neutron-8ccdd85bd-b4bf5\" (UID: \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.592560 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-combined-ca-bundle\") pod \"neutron-8ccdd85bd-b4bf5\" (UID: \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.602491 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxp8w\" (UniqueName: \"kubernetes.io/projected/97e7d696-8bfc-49ec-ae8c-6061b9813d16-kube-api-access-wxp8w\") pod \"neutron-8ccdd85bd-b4bf5\" (UID: \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.631421 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19af684a-ef80-4214-89c5-4c184c4ca0d6" 
path="/var/lib/kubelet/pods/19af684a-ef80-4214-89c5-4c184c4ca0d6/volumes" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.631873 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="348acb62-dfd9-46b8-9411-4706a2ca646f" path="/var/lib/kubelet/pods/348acb62-dfd9-46b8-9411-4706a2ca646f/volumes" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.632296 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96736cea-2309-4ec2-b1b5-edb4b4aafb2b" path="/var/lib/kubelet/pods/96736cea-2309-4ec2-b1b5-edb4b4aafb2b/volumes" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.632661 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fabd59a-046c-4afa-b884-f5a83cc91a53" path="/var/lib/kubelet/pods/9fabd59a-046c-4afa-b884-f5a83cc91a53/volumes" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.633739 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.709643 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:07:10 crc kubenswrapper[4832]: I0312 15:07:10.922206 4832 scope.go:117] "RemoveContainer" containerID="f039486d5968a082ee59bdbab60e30976379b46da04eb21bddf134ed51b78318" Mar 12 15:07:10 crc kubenswrapper[4832]: E0312 15:07:10.929834 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Mar 12 15:07:10 crc kubenswrapper[4832]: E0312 15:07:10.930018 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,
MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jdjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-2hkzk_openstack(8a7d0054-4697-4cbb-bc50-18024fc3bfbc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 15:07:10 crc kubenswrapper[4832]: W0312 15:07:10.930753 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf161113d_cd3a_4bb1_be7b_70ab01642c94.slice/crio-0c4ec0f2858fb15e8f5b54de7071a84b2c729679203dbd1859f0a3de35c17c69 WatchSource:0}: Error finding container 0c4ec0f2858fb15e8f5b54de7071a84b2c729679203dbd1859f0a3de35c17c69: Status 404 returned error can't find the container with id 0c4ec0f2858fb15e8f5b54de7071a84b2c729679203dbd1859f0a3de35c17c69 Mar 12 15:07:10 crc kubenswrapper[4832]: E0312 15:07:10.931365 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: 
context canceled\"" pod="openstack/cinder-db-sync-2hkzk" podUID="8a7d0054-4697-4cbb-bc50-18024fc3bfbc" Mar 12 15:07:11 crc kubenswrapper[4832]: I0312 15:07:11.181358 4832 scope.go:117] "RemoveContainer" containerID="8bbdd46e82c0779df5f7b1241f16735d46ff482acb3908d9c1f5bde24323ffa6" Mar 12 15:07:11 crc kubenswrapper[4832]: I0312 15:07:11.228704 4832 scope.go:117] "RemoveContainer" containerID="e3c5af13b0e15840d0ce592aee560451f86670a0fc2563dc381e8149c98d58fb" Mar 12 15:07:11 crc kubenswrapper[4832]: I0312 15:07:11.426428 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c5974b5d4-dhhm8"] Mar 12 15:07:11 crc kubenswrapper[4832]: I0312 15:07:11.543470 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6psc2"] Mar 12 15:07:11 crc kubenswrapper[4832]: W0312 15:07:11.553664 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1282c2b_8bd5_4beb_a929_b86c1ae950a6.slice/crio-b29d62034774e71e46a4d84178b0a943835a83adcf94f758fc698870b2e8c86e WatchSource:0}: Error finding container b29d62034774e71e46a4d84178b0a943835a83adcf94f758fc698870b2e8c86e: Status 404 returned error can't find the container with id b29d62034774e71e46a4d84178b0a943835a83adcf94f758fc698870b2e8c86e Mar 12 15:07:11 crc kubenswrapper[4832]: I0312 15:07:11.740263 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-745cdbf99b-kdz5c"] Mar 12 15:07:11 crc kubenswrapper[4832]: I0312 15:07:11.799105 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fde061f5-d765-49d5-9cff-77ac3d31dd40","Type":"ContainerStarted","Data":"830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713"} Mar 12 15:07:11 crc kubenswrapper[4832]: I0312 15:07:11.802048 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"f161113d-cd3a-4bb1-be7b-70ab01642c94","Type":"ContainerStarted","Data":"0c4ec0f2858fb15e8f5b54de7071a84b2c729679203dbd1859f0a3de35c17c69"} Mar 12 15:07:11 crc kubenswrapper[4832]: I0312 15:07:11.804282 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c5974b5d4-dhhm8" event={"ID":"06633b31-01e2-4a1c-bf9e-e74b157fba1d","Type":"ContainerStarted","Data":"4973f295e26fed952354b17c82a03dc330d4f909f3c56c4408eb157b6911802f"} Mar 12 15:07:11 crc kubenswrapper[4832]: I0312 15:07:11.806369 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6psc2" event={"ID":"f1282c2b-8bd5-4beb-a929-b86c1ae950a6","Type":"ContainerStarted","Data":"b29d62034774e71e46a4d84178b0a943835a83adcf94f758fc698870b2e8c86e"} Mar 12 15:07:11 crc kubenswrapper[4832]: I0312 15:07:11.808377 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-745cdbf99b-kdz5c" event={"ID":"7b32181c-0268-4e3e-8b7b-f2811720ce58","Type":"ContainerStarted","Data":"c177d8f12dd5cfecd3b8f7a7a009cb40de2aa436f2b0d0856655efe6c17bace5"} Mar 12 15:07:11 crc kubenswrapper[4832]: E0312 15:07:11.809293 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-2hkzk" podUID="8a7d0054-4697-4cbb-bc50-18024fc3bfbc" Mar 12 15:07:11 crc kubenswrapper[4832]: I0312 15:07:11.845649 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8ccdd85bd-b4bf5"] Mar 12 15:07:11 crc kubenswrapper[4832]: I0312 15:07:11.881317 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-nbwkf"] Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.433754 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8656449bc9-dm7zf"] Mar 12 15:07:12 crc kubenswrapper[4832]: 
I0312 15:07:12.435345 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.437399 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.437741 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.468223 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8656449bc9-dm7zf"] Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.536585 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-internal-tls-certs\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.536635 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-combined-ca-bundle\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.536689 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-config\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.536724 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-ovndb-tls-certs\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.536763 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-httpd-config\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.536786 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvm85\" (UniqueName: \"kubernetes.io/projected/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-kube-api-access-zvm85\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.536804 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-public-tls-certs\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.540037 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.640408 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-public-tls-certs\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 
15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.640539 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-internal-tls-certs\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.640565 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-combined-ca-bundle\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.640615 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-config\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.640650 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-ovndb-tls-certs\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.640686 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-httpd-config\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.640706 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zvm85\" (UniqueName: \"kubernetes.io/projected/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-kube-api-access-zvm85\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.648990 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-public-tls-certs\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.649037 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-config\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.653898 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-ovndb-tls-certs\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.653977 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-httpd-config\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.655632 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-internal-tls-certs\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.693267 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvm85\" (UniqueName: \"kubernetes.io/projected/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-kube-api-access-zvm85\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.702350 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-combined-ca-bundle\") pod \"neutron-8656449bc9-dm7zf\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.802932 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.867722 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-745cdbf99b-kdz5c" event={"ID":"7b32181c-0268-4e3e-8b7b-f2811720ce58","Type":"ContainerStarted","Data":"ff9a699aa277b6a5323dd5d39dc4caf402a7524bc23fdae63427ae7fa7ee13cf"} Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.867768 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-745cdbf99b-kdz5c" event={"ID":"7b32181c-0268-4e3e-8b7b-f2811720ce58","Type":"ContainerStarted","Data":"7d90420d7c4978b59e4c942fbeed59f9ab03f090713025b604574ee43351e007"} Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.903829 4832 generic.go:334] "Generic (PLEG): container finished" podID="4ef725f8-44da-46ad-8a0d-e5eed8cd6106" containerID="cc292c3054618677f5e476ef1fc1082bfb575bca8a861212a8c0ad4ea3b11ae2" exitCode=0 Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.903899 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" event={"ID":"4ef725f8-44da-46ad-8a0d-e5eed8cd6106","Type":"ContainerDied","Data":"cc292c3054618677f5e476ef1fc1082bfb575bca8a861212a8c0ad4ea3b11ae2"} Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.903921 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" event={"ID":"4ef725f8-44da-46ad-8a0d-e5eed8cd6106","Type":"ContainerStarted","Data":"e442ea0b2d09641d4702b5822c5ed6f8bb9729b7b6a6ec2777b7f041a397b306"} Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.938131 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-745cdbf99b-kdz5c" podStartSLOduration=23.504054009 podStartE2EDuration="23.93811684s" podCreationTimestamp="2026-03-12 15:06:49 +0000 UTC" firstStartedPulling="2026-03-12 15:07:11.757940155 +0000 UTC m=+1190.401954381" lastFinishedPulling="2026-03-12 15:07:12.192002986 
+0000 UTC m=+1190.836017212" observedRunningTime="2026-03-12 15:07:12.935053861 +0000 UTC m=+1191.579068097" watchObservedRunningTime="2026-03-12 15:07:12.93811684 +0000 UTC m=+1191.582131066" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.955567 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8ccdd85bd-b4bf5" event={"ID":"97e7d696-8bfc-49ec-ae8c-6061b9813d16","Type":"ContainerStarted","Data":"7c65c318197a5e7c198b9634dd571c80a048fca8dcedb43f84b073eee536eef5"} Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.955610 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8ccdd85bd-b4bf5" event={"ID":"97e7d696-8bfc-49ec-ae8c-6061b9813d16","Type":"ContainerStarted","Data":"81a8742ae704c0476e70eb876bdb98687dad42e72b8c6dcb42fdaf3a7dab5eb3"} Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.955621 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8ccdd85bd-b4bf5" event={"ID":"97e7d696-8bfc-49ec-ae8c-6061b9813d16","Type":"ContainerStarted","Data":"ccb417a2b2ea575668c3b6157b67b194b2f6e11776a6cc0974aa342d5eb24461"} Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.955889 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.984618 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f161113d-cd3a-4bb1-be7b-70ab01642c94","Type":"ContainerStarted","Data":"1edde3f067ddd1e575eb64ef71b6252f98d05d71dd12ad01710b0231394a9e25"} Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.984711 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f161113d-cd3a-4bb1-be7b-70ab01642c94","Type":"ContainerStarted","Data":"105ae678e8e254f80279a8698898ab1d4e50e09b276858c8d2be517d85ab0bd4"} Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.985168 4832 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f161113d-cd3a-4bb1-be7b-70ab01642c94" containerName="glance-log" containerID="cri-o://105ae678e8e254f80279a8698898ab1d4e50e09b276858c8d2be517d85ab0bd4" gracePeriod=30 Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.986471 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f161113d-cd3a-4bb1-be7b-70ab01642c94" containerName="glance-httpd" containerID="cri-o://1edde3f067ddd1e575eb64ef71b6252f98d05d71dd12ad01710b0231394a9e25" gracePeriod=30 Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.997966 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c5974b5d4-dhhm8" event={"ID":"06633b31-01e2-4a1c-bf9e-e74b157fba1d","Type":"ContainerStarted","Data":"65fd159df34d94fcc47609c9c7ad30f631abefdc1c72fae976802d0d633fae83"} Mar 12 15:07:12 crc kubenswrapper[4832]: I0312 15:07:12.998286 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c5974b5d4-dhhm8" event={"ID":"06633b31-01e2-4a1c-bf9e-e74b157fba1d","Type":"ContainerStarted","Data":"22b15c195a91b1780eb3ccbcc0185e5779caad80e9adda0e42a258c2d6638c72"} Mar 12 15:07:13 crc kubenswrapper[4832]: I0312 15:07:13.002157 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"81ebdaa9-c995-4936-9476-d996aa2532a9","Type":"ContainerStarted","Data":"2123a69629ca06fe4266fdbeb7fc66db7a84fdd7e7467e762db9b29ecbcb9079"} Mar 12 15:07:13 crc kubenswrapper[4832]: I0312 15:07:13.004084 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6psc2" event={"ID":"f1282c2b-8bd5-4beb-a929-b86c1ae950a6","Type":"ContainerStarted","Data":"0acff1ea10ef3df7b6acf3e00e19b2bd07b63360c8b001c9bb8ced8793dc927a"} Mar 12 15:07:13 crc kubenswrapper[4832]: I0312 15:07:13.091277 4832 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8ccdd85bd-b4bf5" podStartSLOduration=3.091260863 podStartE2EDuration="3.091260863s" podCreationTimestamp="2026-03-12 15:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:07:13.071751787 +0000 UTC m=+1191.715766013" watchObservedRunningTime="2026-03-12 15:07:13.091260863 +0000 UTC m=+1191.735275089" Mar 12 15:07:13 crc kubenswrapper[4832]: I0312 15:07:13.119243 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=28.119228284 podStartE2EDuration="28.119228284s" podCreationTimestamp="2026-03-12 15:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:07:13.115256949 +0000 UTC m=+1191.759271175" watchObservedRunningTime="2026-03-12 15:07:13.119228284 +0000 UTC m=+1191.763242510" Mar 12 15:07:13 crc kubenswrapper[4832]: I0312 15:07:13.140165 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6psc2" podStartSLOduration=18.140147741 podStartE2EDuration="18.140147741s" podCreationTimestamp="2026-03-12 15:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:07:13.136906087 +0000 UTC m=+1191.780920313" watchObservedRunningTime="2026-03-12 15:07:13.140147741 +0000 UTC m=+1191.784161967" Mar 12 15:07:13 crc kubenswrapper[4832]: I0312 15:07:13.181342 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7c5974b5d4-dhhm8" podStartSLOduration=23.620018763 podStartE2EDuration="24.181324376s" podCreationTimestamp="2026-03-12 15:06:49 +0000 UTC" firstStartedPulling="2026-03-12 15:07:11.436025456 +0000 UTC 
m=+1190.080039682" lastFinishedPulling="2026-03-12 15:07:11.997331069 +0000 UTC m=+1190.641345295" observedRunningTime="2026-03-12 15:07:13.171042487 +0000 UTC m=+1191.815056713" watchObservedRunningTime="2026-03-12 15:07:13.181324376 +0000 UTC m=+1191.825338602" Mar 12 15:07:13 crc kubenswrapper[4832]: I0312 15:07:13.660056 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8656449bc9-dm7zf"] Mar 12 15:07:13 crc kubenswrapper[4832]: W0312 15:07:13.700590 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dd2bbb8_22a1_48e3_b4fd_7869bb93e499.slice/crio-c4f4c31a977e475c6caf914359a33c7025cc8c189960abb5cced5b08e9a73405 WatchSource:0}: Error finding container c4f4c31a977e475c6caf914359a33c7025cc8c189960abb5cced5b08e9a73405: Status 404 returned error can't find the container with id c4f4c31a977e475c6caf914359a33c7025cc8c189960abb5cced5b08e9a73405 Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.018884 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pz5qf" event={"ID":"bdded1bd-9b32-465d-9226-618cf5d0e8bb","Type":"ContainerStarted","Data":"5ce37d8893e2ae7f44f7ebba50d87cf8c314028503b58201f32899347b55e391"} Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.045929 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-pz5qf" podStartSLOduration=2.635846028 podStartE2EDuration="35.045913695s" podCreationTimestamp="2026-03-12 15:06:39 +0000 UTC" firstStartedPulling="2026-03-12 15:06:41.272485619 +0000 UTC m=+1159.916499845" lastFinishedPulling="2026-03-12 15:07:13.682553286 +0000 UTC m=+1192.326567512" observedRunningTime="2026-03-12 15:07:14.042839746 +0000 UTC m=+1192.686853972" watchObservedRunningTime="2026-03-12 15:07:14.045913695 +0000 UTC m=+1192.689927921" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.067894 4832 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-8656449bc9-dm7zf" event={"ID":"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499","Type":"ContainerStarted","Data":"62f772491df4d89fdff5a4ac560ce1fb3ab77ab702b56b95053385553f2ed0ee"} Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.067937 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8656449bc9-dm7zf" event={"ID":"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499","Type":"ContainerStarted","Data":"c4f4c31a977e475c6caf914359a33c7025cc8c189960abb5cced5b08e9a73405"} Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.074557 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" event={"ID":"4ef725f8-44da-46ad-8a0d-e5eed8cd6106","Type":"ContainerStarted","Data":"c605f235ece560d14049a5912a947372f14557898d2aa9762c079ed01f9ed2e8"} Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.075039 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.078785 4832 generic.go:334] "Generic (PLEG): container finished" podID="f161113d-cd3a-4bb1-be7b-70ab01642c94" containerID="1edde3f067ddd1e575eb64ef71b6252f98d05d71dd12ad01710b0231394a9e25" exitCode=0 Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.078836 4832 generic.go:334] "Generic (PLEG): container finished" podID="f161113d-cd3a-4bb1-be7b-70ab01642c94" containerID="105ae678e8e254f80279a8698898ab1d4e50e09b276858c8d2be517d85ab0bd4" exitCode=143 Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.078917 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f161113d-cd3a-4bb1-be7b-70ab01642c94","Type":"ContainerDied","Data":"1edde3f067ddd1e575eb64ef71b6252f98d05d71dd12ad01710b0231394a9e25"} Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.078954 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"f161113d-cd3a-4bb1-be7b-70ab01642c94","Type":"ContainerDied","Data":"105ae678e8e254f80279a8698898ab1d4e50e09b276858c8d2be517d85ab0bd4"} Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.104261 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" podStartSLOduration=4.104245837 podStartE2EDuration="4.104245837s" podCreationTimestamp="2026-03-12 15:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:07:14.101255341 +0000 UTC m=+1192.745269577" watchObservedRunningTime="2026-03-12 15:07:14.104245837 +0000 UTC m=+1192.748260053" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.108881 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"81ebdaa9-c995-4936-9476-d996aa2532a9","Type":"ContainerStarted","Data":"157ed18a133546ce4428cb65e18546ee06d717fa470de24c726e276b19404ce0"} Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.520752 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.664695 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-config-data\") pod \"f161113d-cd3a-4bb1-be7b-70ab01642c94\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.665371 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7hwl\" (UniqueName: \"kubernetes.io/projected/f161113d-cd3a-4bb1-be7b-70ab01642c94-kube-api-access-m7hwl\") pod \"f161113d-cd3a-4bb1-be7b-70ab01642c94\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.665475 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-scripts\") pod \"f161113d-cd3a-4bb1-be7b-70ab01642c94\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.665533 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"f161113d-cd3a-4bb1-be7b-70ab01642c94\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.665604 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f161113d-cd3a-4bb1-be7b-70ab01642c94-logs\") pod \"f161113d-cd3a-4bb1-be7b-70ab01642c94\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.665627 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/f161113d-cd3a-4bb1-be7b-70ab01642c94-httpd-run\") pod \"f161113d-cd3a-4bb1-be7b-70ab01642c94\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.665675 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-combined-ca-bundle\") pod \"f161113d-cd3a-4bb1-be7b-70ab01642c94\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.665744 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-public-tls-certs\") pod \"f161113d-cd3a-4bb1-be7b-70ab01642c94\" (UID: \"f161113d-cd3a-4bb1-be7b-70ab01642c94\") " Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.666687 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f161113d-cd3a-4bb1-be7b-70ab01642c94-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f161113d-cd3a-4bb1-be7b-70ab01642c94" (UID: "f161113d-cd3a-4bb1-be7b-70ab01642c94"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.667232 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f161113d-cd3a-4bb1-be7b-70ab01642c94-logs" (OuterVolumeSpecName: "logs") pod "f161113d-cd3a-4bb1-be7b-70ab01642c94" (UID: "f161113d-cd3a-4bb1-be7b-70ab01642c94"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.680010 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f161113d-cd3a-4bb1-be7b-70ab01642c94-kube-api-access-m7hwl" (OuterVolumeSpecName: "kube-api-access-m7hwl") pod "f161113d-cd3a-4bb1-be7b-70ab01642c94" (UID: "f161113d-cd3a-4bb1-be7b-70ab01642c94"). InnerVolumeSpecName "kube-api-access-m7hwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.680103 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-scripts" (OuterVolumeSpecName: "scripts") pod "f161113d-cd3a-4bb1-be7b-70ab01642c94" (UID: "f161113d-cd3a-4bb1-be7b-70ab01642c94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.682854 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "f161113d-cd3a-4bb1-be7b-70ab01642c94" (UID: "f161113d-cd3a-4bb1-be7b-70ab01642c94"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.691243 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f161113d-cd3a-4bb1-be7b-70ab01642c94" (UID: "f161113d-cd3a-4bb1-be7b-70ab01642c94"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.722107 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f161113d-cd3a-4bb1-be7b-70ab01642c94" (UID: "f161113d-cd3a-4bb1-be7b-70ab01642c94"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.727896 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-config-data" (OuterVolumeSpecName: "config-data") pod "f161113d-cd3a-4bb1-be7b-70ab01642c94" (UID: "f161113d-cd3a-4bb1-be7b-70ab01642c94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.767434 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.767479 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7hwl\" (UniqueName: \"kubernetes.io/projected/f161113d-cd3a-4bb1-be7b-70ab01642c94-kube-api-access-m7hwl\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.767493 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.767559 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Mar 12 15:07:14 crc 
kubenswrapper[4832]: I0312 15:07:14.767576 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f161113d-cd3a-4bb1-be7b-70ab01642c94-logs\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.767589 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f161113d-cd3a-4bb1-be7b-70ab01642c94-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.767600 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.767611 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f161113d-cd3a-4bb1-be7b-70ab01642c94-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.787972 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Mar 12 15:07:14 crc kubenswrapper[4832]: I0312 15:07:14.868806 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.118403 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f161113d-cd3a-4bb1-be7b-70ab01642c94","Type":"ContainerDied","Data":"0c4ec0f2858fb15e8f5b54de7071a84b2c729679203dbd1859f0a3de35c17c69"} Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.119026 4832 scope.go:117] "RemoveContainer" containerID="1edde3f067ddd1e575eb64ef71b6252f98d05d71dd12ad01710b0231394a9e25" Mar 
12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.118425 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.120641 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"81ebdaa9-c995-4936-9476-d996aa2532a9","Type":"ContainerStarted","Data":"a464003e073cc874b041e4e9bfc7257abfc4af61e117202bd3aea2e741d5d044"} Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.125813 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8656449bc9-dm7zf" event={"ID":"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499","Type":"ContainerStarted","Data":"ae1039c0ad29e047c50e8ac376ed0cdbda2cd7d7a7d6a0a65cef98a14d8b2455"} Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.125858 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.150896 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=15.150874239 podStartE2EDuration="15.150874239s" podCreationTimestamp="2026-03-12 15:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:07:15.147764098 +0000 UTC m=+1193.791778334" watchObservedRunningTime="2026-03-12 15:07:15.150874239 +0000 UTC m=+1193.794888475" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.166954 4832 scope.go:117] "RemoveContainer" containerID="105ae678e8e254f80279a8698898ab1d4e50e09b276858c8d2be517d85ab0bd4" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.177636 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8656449bc9-dm7zf" podStartSLOduration=3.177619115 podStartE2EDuration="3.177619115s" 
podCreationTimestamp="2026-03-12 15:07:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:07:15.170079166 +0000 UTC m=+1193.814093412" watchObservedRunningTime="2026-03-12 15:07:15.177619115 +0000 UTC m=+1193.821633341" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.222692 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.253716 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.263534 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:07:15 crc kubenswrapper[4832]: E0312 15:07:15.264186 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f161113d-cd3a-4bb1-be7b-70ab01642c94" containerName="glance-log" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.264216 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f161113d-cd3a-4bb1-be7b-70ab01642c94" containerName="glance-log" Mar 12 15:07:15 crc kubenswrapper[4832]: E0312 15:07:15.264248 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f161113d-cd3a-4bb1-be7b-70ab01642c94" containerName="glance-httpd" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.264259 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f161113d-cd3a-4bb1-be7b-70ab01642c94" containerName="glance-httpd" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.264475 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f161113d-cd3a-4bb1-be7b-70ab01642c94" containerName="glance-log" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.264557 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f161113d-cd3a-4bb1-be7b-70ab01642c94" containerName="glance-httpd" Mar 12 
15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.266646 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.270222 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.270406 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.281604 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.380981 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.381027 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-scripts\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.381074 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-config-data\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.381106 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.381140 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0e23234-0687-46a1-8c0f-823c92c0aebc-logs\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.381196 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0e23234-0687-46a1-8c0f-823c92c0aebc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.381213 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.381255 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4bqr\" (UniqueName: \"kubernetes.io/projected/d0e23234-0687-46a1-8c0f-823c92c0aebc-kube-api-access-j4bqr\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 
15:07:15.482134 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0e23234-0687-46a1-8c0f-823c92c0aebc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.482177 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.482227 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4bqr\" (UniqueName: \"kubernetes.io/projected/d0e23234-0687-46a1-8c0f-823c92c0aebc-kube-api-access-j4bqr\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.482268 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.482285 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-scripts\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.482319 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-config-data\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.482338 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.482374 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0e23234-0687-46a1-8c0f-823c92c0aebc-logs\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.482661 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0e23234-0687-46a1-8c0f-823c92c0aebc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.482704 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0e23234-0687-46a1-8c0f-823c92c0aebc-logs\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.483165 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod 
\"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.488561 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.499675 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-scripts\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.500417 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-config-data\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.503535 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.506616 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4bqr\" (UniqueName: \"kubernetes.io/projected/d0e23234-0687-46a1-8c0f-823c92c0aebc-kube-api-access-j4bqr\") pod \"glance-default-external-api-0\" (UID: 
\"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.524089 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " pod="openstack/glance-default-external-api-0" Mar 12 15:07:15 crc kubenswrapper[4832]: I0312 15:07:15.588618 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 15:07:16 crc kubenswrapper[4832]: I0312 15:07:16.098311 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:07:16 crc kubenswrapper[4832]: I0312 15:07:16.134715 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0e23234-0687-46a1-8c0f-823c92c0aebc","Type":"ContainerStarted","Data":"bc6e741fc34569080e14216765942a84331519123480684933d69d999c6c8e3d"} Mar 12 15:07:16 crc kubenswrapper[4832]: I0312 15:07:16.140208 4832 generic.go:334] "Generic (PLEG): container finished" podID="bdded1bd-9b32-465d-9226-618cf5d0e8bb" containerID="5ce37d8893e2ae7f44f7ebba50d87cf8c314028503b58201f32899347b55e391" exitCode=0 Mar 12 15:07:16 crc kubenswrapper[4832]: I0312 15:07:16.140260 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pz5qf" event={"ID":"bdded1bd-9b32-465d-9226-618cf5d0e8bb","Type":"ContainerDied","Data":"5ce37d8893e2ae7f44f7ebba50d87cf8c314028503b58201f32899347b55e391"} Mar 12 15:07:16 crc kubenswrapper[4832]: I0312 15:07:16.143107 4832 generic.go:334] "Generic (PLEG): container finished" podID="f1282c2b-8bd5-4beb-a929-b86c1ae950a6" containerID="0acff1ea10ef3df7b6acf3e00e19b2bd07b63360c8b001c9bb8ced8793dc927a" exitCode=0 Mar 12 15:07:16 crc kubenswrapper[4832]: I0312 15:07:16.143157 
4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6psc2" event={"ID":"f1282c2b-8bd5-4beb-a929-b86c1ae950a6","Type":"ContainerDied","Data":"0acff1ea10ef3df7b6acf3e00e19b2bd07b63360c8b001c9bb8ced8793dc927a"} Mar 12 15:07:16 crc kubenswrapper[4832]: I0312 15:07:16.635052 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f161113d-cd3a-4bb1-be7b-70ab01642c94" path="/var/lib/kubelet/pods/f161113d-cd3a-4bb1-be7b-70ab01642c94/volumes" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.157058 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0e23234-0687-46a1-8c0f-823c92c0aebc","Type":"ContainerStarted","Data":"5eefbfa7eb669d0a4c862a0ab44eb862143339915cf3bd65f0fcdebf03ea9b81"} Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.698028 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.703233 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-pz5qf" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.822495 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q42g8\" (UniqueName: \"kubernetes.io/projected/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-kube-api-access-q42g8\") pod \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.822819 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdded1bd-9b32-465d-9226-618cf5d0e8bb-combined-ca-bundle\") pod \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.822927 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-combined-ca-bundle\") pod \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.823051 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-config-data\") pod \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.823162 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-fernet-keys\") pod \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.823276 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/bdded1bd-9b32-465d-9226-618cf5d0e8bb-logs\") pod \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.823436 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-credential-keys\") pod \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.823917 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdded1bd-9b32-465d-9226-618cf5d0e8bb-scripts\") pod \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.824113 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-scripts\") pod \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\" (UID: \"f1282c2b-8bd5-4beb-a929-b86c1ae950a6\") " Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.824210 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdded1bd-9b32-465d-9226-618cf5d0e8bb-config-data\") pod \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.824339 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjmb2\" (UniqueName: \"kubernetes.io/projected/bdded1bd-9b32-465d-9226-618cf5d0e8bb-kube-api-access-vjmb2\") pod \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\" (UID: \"bdded1bd-9b32-465d-9226-618cf5d0e8bb\") " Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.823644 4832 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdded1bd-9b32-465d-9226-618cf5d0e8bb-logs" (OuterVolumeSpecName: "logs") pod "bdded1bd-9b32-465d-9226-618cf5d0e8bb" (UID: "bdded1bd-9b32-465d-9226-618cf5d0e8bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.829026 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f1282c2b-8bd5-4beb-a929-b86c1ae950a6" (UID: "f1282c2b-8bd5-4beb-a929-b86c1ae950a6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.829819 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-kube-api-access-q42g8" (OuterVolumeSpecName: "kube-api-access-q42g8") pod "f1282c2b-8bd5-4beb-a929-b86c1ae950a6" (UID: "f1282c2b-8bd5-4beb-a929-b86c1ae950a6"). InnerVolumeSpecName "kube-api-access-q42g8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.829975 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f1282c2b-8bd5-4beb-a929-b86c1ae950a6" (UID: "f1282c2b-8bd5-4beb-a929-b86c1ae950a6"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.835590 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdded1bd-9b32-465d-9226-618cf5d0e8bb-scripts" (OuterVolumeSpecName: "scripts") pod "bdded1bd-9b32-465d-9226-618cf5d0e8bb" (UID: "bdded1bd-9b32-465d-9226-618cf5d0e8bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.839409 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-scripts" (OuterVolumeSpecName: "scripts") pod "f1282c2b-8bd5-4beb-a929-b86c1ae950a6" (UID: "f1282c2b-8bd5-4beb-a929-b86c1ae950a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.839765 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdded1bd-9b32-465d-9226-618cf5d0e8bb-kube-api-access-vjmb2" (OuterVolumeSpecName: "kube-api-access-vjmb2") pod "bdded1bd-9b32-465d-9226-618cf5d0e8bb" (UID: "bdded1bd-9b32-465d-9226-618cf5d0e8bb"). InnerVolumeSpecName "kube-api-access-vjmb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.864807 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1282c2b-8bd5-4beb-a929-b86c1ae950a6" (UID: "f1282c2b-8bd5-4beb-a929-b86c1ae950a6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.868241 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-config-data" (OuterVolumeSpecName: "config-data") pod "f1282c2b-8bd5-4beb-a929-b86c1ae950a6" (UID: "f1282c2b-8bd5-4beb-a929-b86c1ae950a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.873228 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdded1bd-9b32-465d-9226-618cf5d0e8bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdded1bd-9b32-465d-9226-618cf5d0e8bb" (UID: "bdded1bd-9b32-465d-9226-618cf5d0e8bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.887709 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdded1bd-9b32-465d-9226-618cf5d0e8bb-config-data" (OuterVolumeSpecName: "config-data") pod "bdded1bd-9b32-465d-9226-618cf5d0e8bb" (UID: "bdded1bd-9b32-465d-9226-618cf5d0e8bb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.926532 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjmb2\" (UniqueName: \"kubernetes.io/projected/bdded1bd-9b32-465d-9226-618cf5d0e8bb-kube-api-access-vjmb2\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.926867 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q42g8\" (UniqueName: \"kubernetes.io/projected/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-kube-api-access-q42g8\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.926882 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdded1bd-9b32-465d-9226-618cf5d0e8bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.926892 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.926901 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.926909 4832 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.926918 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdded1bd-9b32-465d-9226-618cf5d0e8bb-logs\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.926925 4832 
reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.926933 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdded1bd-9b32-465d-9226-618cf5d0e8bb-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.926942 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1282c2b-8bd5-4beb-a929-b86c1ae950a6-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:17 crc kubenswrapper[4832]: I0312 15:07:17.926950 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdded1bd-9b32-465d-9226-618cf5d0e8bb-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.170660 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0e23234-0687-46a1-8c0f-823c92c0aebc","Type":"ContainerStarted","Data":"ecce77dfc2c2cf511b6ae3e4dbd8a0809b952ff46a2082dc4deebb736927564a"} Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.174729 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pz5qf" event={"ID":"bdded1bd-9b32-465d-9226-618cf5d0e8bb","Type":"ContainerDied","Data":"33c2e8e8b215c2df6e1c7b982ffb2da46592587dcf55f9fb7831c1c18cfe6614"} Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.174765 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33c2e8e8b215c2df6e1c7b982ffb2da46592587dcf55f9fb7831c1c18cfe6614" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.174846 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-pz5qf" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.192187 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6psc2" event={"ID":"f1282c2b-8bd5-4beb-a929-b86c1ae950a6","Type":"ContainerDied","Data":"b29d62034774e71e46a4d84178b0a943835a83adcf94f758fc698870b2e8c86e"} Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.192221 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b29d62034774e71e46a4d84178b0a943835a83adcf94f758fc698870b2e8c86e" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.192298 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6psc2" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.205645 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fde061f5-d765-49d5-9cff-77ac3d31dd40","Type":"ContainerStarted","Data":"1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c"} Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.234884 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.23486658 podStartE2EDuration="3.23486658s" podCreationTimestamp="2026-03-12 15:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:07:18.210592656 +0000 UTC m=+1196.854606882" watchObservedRunningTime="2026-03-12 15:07:18.23486658 +0000 UTC m=+1196.878880796" Mar 12 15:07:18 crc kubenswrapper[4832]: E0312 15:07:18.257877 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdded1bd_9b32_465d_9226_618cf5d0e8bb.slice\": RecentStats: unable to find data in memory cache]" Mar 12 15:07:18 
crc kubenswrapper[4832]: I0312 15:07:18.337741 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-84975bc55b-p4rz5"] Mar 12 15:07:18 crc kubenswrapper[4832]: E0312 15:07:18.338725 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdded1bd-9b32-465d-9226-618cf5d0e8bb" containerName="placement-db-sync" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.338824 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdded1bd-9b32-465d-9226-618cf5d0e8bb" containerName="placement-db-sync" Mar 12 15:07:18 crc kubenswrapper[4832]: E0312 15:07:18.338910 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1282c2b-8bd5-4beb-a929-b86c1ae950a6" containerName="keystone-bootstrap" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.338963 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1282c2b-8bd5-4beb-a929-b86c1ae950a6" containerName="keystone-bootstrap" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.339180 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdded1bd-9b32-465d-9226-618cf5d0e8bb" containerName="placement-db-sync" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.339239 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1282c2b-8bd5-4beb-a929-b86c1ae950a6" containerName="keystone-bootstrap" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.339857 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.351132 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.351596 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7gszf" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.351782 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.351823 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.351862 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.360247 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.360334 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-84975bc55b-p4rz5"] Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.381623 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5578794dcb-v62kh"] Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.382958 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.391292 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.391354 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-sdzt6" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.391553 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.391587 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.391669 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.410090 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5578794dcb-v62kh"] Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.434748 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-credential-keys\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.434787 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-internal-tls-certs\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.434815 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-logs\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.434857 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-public-tls-certs\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.434895 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-internal-tls-certs\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.434923 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-fernet-keys\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.434937 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-combined-ca-bundle\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.434957 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-combined-ca-bundle\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.434985 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-scripts\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.435000 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-config-data\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.435039 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-scripts\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.435060 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-config-data\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.435096 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-public-tls-certs\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.435115 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kv9n\" (UniqueName: \"kubernetes.io/projected/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-kube-api-access-8kv9n\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.435134 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv8ct\" (UniqueName: \"kubernetes.io/projected/606a2cdd-10ea-4e32-876c-b2149a2aa921-kube-api-access-xv8ct\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.536729 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-config-data\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.536817 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-public-tls-certs\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.536850 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8kv9n\" (UniqueName: \"kubernetes.io/projected/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-kube-api-access-8kv9n\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.536884 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv8ct\" (UniqueName: \"kubernetes.io/projected/606a2cdd-10ea-4e32-876c-b2149a2aa921-kube-api-access-xv8ct\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.536912 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-credential-keys\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.536927 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-internal-tls-certs\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.536950 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-logs\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.536996 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-public-tls-certs\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.537030 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-internal-tls-certs\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.537066 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-fernet-keys\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.537082 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-combined-ca-bundle\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.537107 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-combined-ca-bundle\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.537134 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-scripts\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.537150 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-config-data\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.537190 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-scripts\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.538498 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-logs\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.542087 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-scripts\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.542436 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-credential-keys\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " 
pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.542898 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-internal-tls-certs\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.546036 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-public-tls-certs\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.548279 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-internal-tls-certs\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.548384 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-public-tls-certs\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.548614 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-combined-ca-bundle\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 
15:07:18.549434 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-config-data\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.549486 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-config-data\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.552209 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-scripts\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.552849 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-fernet-keys\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.559077 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606a2cdd-10ea-4e32-876c-b2149a2aa921-combined-ca-bundle\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.561478 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kv9n\" (UniqueName: 
\"kubernetes.io/projected/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-kube-api-access-8kv9n\") pod \"placement-5578794dcb-v62kh\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.581527 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv8ct\" (UniqueName: \"kubernetes.io/projected/606a2cdd-10ea-4e32-876c-b2149a2aa921-kube-api-access-xv8ct\") pod \"keystone-84975bc55b-p4rz5\" (UID: \"606a2cdd-10ea-4e32-876c-b2149a2aa921\") " pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.663339 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-58b9f48778-gcmpc"] Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.670033 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.670170 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.675120 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-58b9f48778-gcmpc"] Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.708819 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.756721 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6070e7f1-ea29-422e-9574-77b87a8a9c3b-internal-tls-certs\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.756831 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6070e7f1-ea29-422e-9574-77b87a8a9c3b-public-tls-certs\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.756959 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6070e7f1-ea29-422e-9574-77b87a8a9c3b-config-data\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.757126 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6plm\" (UniqueName: \"kubernetes.io/projected/6070e7f1-ea29-422e-9574-77b87a8a9c3b-kube-api-access-h6plm\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.757212 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6070e7f1-ea29-422e-9574-77b87a8a9c3b-logs\") pod \"placement-58b9f48778-gcmpc\" (UID: 
\"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.757351 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6070e7f1-ea29-422e-9574-77b87a8a9c3b-scripts\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.757429 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6070e7f1-ea29-422e-9574-77b87a8a9c3b-combined-ca-bundle\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.859012 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6070e7f1-ea29-422e-9574-77b87a8a9c3b-logs\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.859056 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6070e7f1-ea29-422e-9574-77b87a8a9c3b-scripts\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.859103 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6070e7f1-ea29-422e-9574-77b87a8a9c3b-combined-ca-bundle\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " 
pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.859126 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6070e7f1-ea29-422e-9574-77b87a8a9c3b-internal-tls-certs\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.859180 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6070e7f1-ea29-422e-9574-77b87a8a9c3b-public-tls-certs\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.859198 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6070e7f1-ea29-422e-9574-77b87a8a9c3b-config-data\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.859250 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6plm\" (UniqueName: \"kubernetes.io/projected/6070e7f1-ea29-422e-9574-77b87a8a9c3b-kube-api-access-h6plm\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.860463 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6070e7f1-ea29-422e-9574-77b87a8a9c3b-logs\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 
15:07:18.876806 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6plm\" (UniqueName: \"kubernetes.io/projected/6070e7f1-ea29-422e-9574-77b87a8a9c3b-kube-api-access-h6plm\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.877265 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6070e7f1-ea29-422e-9574-77b87a8a9c3b-combined-ca-bundle\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.881906 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6070e7f1-ea29-422e-9574-77b87a8a9c3b-public-tls-certs\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.883598 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6070e7f1-ea29-422e-9574-77b87a8a9c3b-scripts\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.886072 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6070e7f1-ea29-422e-9574-77b87a8a9c3b-internal-tls-certs\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:18 crc kubenswrapper[4832]: I0312 15:07:18.896039 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/6070e7f1-ea29-422e-9574-77b87a8a9c3b-config-data\") pod \"placement-58b9f48778-gcmpc\" (UID: \"6070e7f1-ea29-422e-9574-77b87a8a9c3b\") " pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:19 crc kubenswrapper[4832]: I0312 15:07:19.071659 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:19 crc kubenswrapper[4832]: I0312 15:07:19.179653 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-84975bc55b-p4rz5"] Mar 12 15:07:19 crc kubenswrapper[4832]: W0312 15:07:19.201147 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod606a2cdd_10ea_4e32_876c_b2149a2aa921.slice/crio-11d80c5fb41084693ba55b4ea7d3b8980738ffa1ab698e6cf68f403d1a0ee669 WatchSource:0}: Error finding container 11d80c5fb41084693ba55b4ea7d3b8980738ffa1ab698e6cf68f403d1a0ee669: Status 404 returned error can't find the container with id 11d80c5fb41084693ba55b4ea7d3b8980738ffa1ab698e6cf68f403d1a0ee669 Mar 12 15:07:19 crc kubenswrapper[4832]: I0312 15:07:19.263480 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-84975bc55b-p4rz5" event={"ID":"606a2cdd-10ea-4e32-876c-b2149a2aa921","Type":"ContainerStarted","Data":"11d80c5fb41084693ba55b4ea7d3b8980738ffa1ab698e6cf68f403d1a0ee669"} Mar 12 15:07:19 crc kubenswrapper[4832]: I0312 15:07:19.315114 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5578794dcb-v62kh"] Mar 12 15:07:19 crc kubenswrapper[4832]: W0312 15:07:19.388558 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7262d72_51ea_48ae_8dc5_c0cb0d46f69c.slice/crio-f32552e418d4536c8bb6235d250e594cb6787370663d4c9a5f34c265979f7897 WatchSource:0}: Error finding container f32552e418d4536c8bb6235d250e594cb6787370663d4c9a5f34c265979f7897: Status 
404 returned error can't find the container with id f32552e418d4536c8bb6235d250e594cb6787370663d4c9a5f34c265979f7897 Mar 12 15:07:19 crc kubenswrapper[4832]: I0312 15:07:19.529829 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:07:19 crc kubenswrapper[4832]: I0312 15:07:19.530765 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:07:19 crc kubenswrapper[4832]: I0312 15:07:19.704353 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-58b9f48778-gcmpc"] Mar 12 15:07:19 crc kubenswrapper[4832]: W0312 15:07:19.757808 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6070e7f1_ea29_422e_9574_77b87a8a9c3b.slice/crio-862baab761e32cb79606abb014e9f5433b91066c02474f88df892bd64e59518a WatchSource:0}: Error finding container 862baab761e32cb79606abb014e9f5433b91066c02474f88df892bd64e59518a: Status 404 returned error can't find the container with id 862baab761e32cb79606abb014e9f5433b91066c02474f88df892bd64e59518a Mar 12 15:07:19 crc kubenswrapper[4832]: I0312 15:07:19.956737 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:07:19 crc kubenswrapper[4832]: I0312 15:07:19.956769 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:07:20 crc kubenswrapper[4832]: I0312 15:07:20.339702 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5578794dcb-v62kh" event={"ID":"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c","Type":"ContainerStarted","Data":"5180cc10c743765f86eb70821a11e9f4a556a726df55e21ddecbf3ad3dfa3ae2"} Mar 12 15:07:20 crc kubenswrapper[4832]: I0312 15:07:20.340061 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5578794dcb-v62kh" 
event={"ID":"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c","Type":"ContainerStarted","Data":"f77e13c5e1118df9cbc733128dbb6422c40c322c824f95f316e5c55d1147e212"} Mar 12 15:07:20 crc kubenswrapper[4832]: I0312 15:07:20.340076 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5578794dcb-v62kh" event={"ID":"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c","Type":"ContainerStarted","Data":"f32552e418d4536c8bb6235d250e594cb6787370663d4c9a5f34c265979f7897"} Mar 12 15:07:20 crc kubenswrapper[4832]: I0312 15:07:20.340094 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:20 crc kubenswrapper[4832]: I0312 15:07:20.340107 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:20 crc kubenswrapper[4832]: I0312 15:07:20.350208 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-84975bc55b-p4rz5" event={"ID":"606a2cdd-10ea-4e32-876c-b2149a2aa921","Type":"ContainerStarted","Data":"0efd3d814ec72b7753b44fa29caa4b333095141aad5f75b53ebb70465a9ddb52"} Mar 12 15:07:20 crc kubenswrapper[4832]: I0312 15:07:20.350760 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:20 crc kubenswrapper[4832]: I0312 15:07:20.367615 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58b9f48778-gcmpc" event={"ID":"6070e7f1-ea29-422e-9574-77b87a8a9c3b","Type":"ContainerStarted","Data":"ff812c466a4d87a5c6b34f101f6ca905ce75b72c38747f75c687aa249168a8a1"} Mar 12 15:07:20 crc kubenswrapper[4832]: I0312 15:07:20.367859 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58b9f48778-gcmpc" event={"ID":"6070e7f1-ea29-422e-9574-77b87a8a9c3b","Type":"ContainerStarted","Data":"862baab761e32cb79606abb014e9f5433b91066c02474f88df892bd64e59518a"} Mar 12 15:07:20 crc kubenswrapper[4832]: I0312 15:07:20.403784 4832 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5578794dcb-v62kh" podStartSLOduration=2.403769777 podStartE2EDuration="2.403769777s" podCreationTimestamp="2026-03-12 15:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:07:20.380005728 +0000 UTC m=+1199.024019954" watchObservedRunningTime="2026-03-12 15:07:20.403769777 +0000 UTC m=+1199.047784003" Mar 12 15:07:20 crc kubenswrapper[4832]: I0312 15:07:20.431813 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-84975bc55b-p4rz5" podStartSLOduration=2.43179614 podStartE2EDuration="2.43179614s" podCreationTimestamp="2026-03-12 15:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:07:20.424302643 +0000 UTC m=+1199.068316869" watchObservedRunningTime="2026-03-12 15:07:20.43179614 +0000 UTC m=+1199.075810366" Mar 12 15:07:20 crc kubenswrapper[4832]: I0312 15:07:20.639967 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:20 crc kubenswrapper[4832]: I0312 15:07:20.708984 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lp2dv"] Mar 12 15:07:20 crc kubenswrapper[4832]: I0312 15:07:20.709203 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" podUID="ce9a0e38-bff9-4748-85a4-19e165398bae" containerName="dnsmasq-dns" containerID="cri-o://fe30b392659cf097e2aa65ea53615a417daf44c76232d6017c9d52956932626e" gracePeriod=10 Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.140429 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 12 15:07:21 crc kubenswrapper[4832]: 
I0312 15:07:21.140782 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.228156 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.243686 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.243825 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.328944 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-config\") pod \"ce9a0e38-bff9-4748-85a4-19e165398bae\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.329020 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-dns-svc\") pod \"ce9a0e38-bff9-4748-85a4-19e165398bae\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.329113 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-dns-swift-storage-0\") pod \"ce9a0e38-bff9-4748-85a4-19e165398bae\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.329191 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mldp\" (UniqueName: 
\"kubernetes.io/projected/ce9a0e38-bff9-4748-85a4-19e165398bae-kube-api-access-4mldp\") pod \"ce9a0e38-bff9-4748-85a4-19e165398bae\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.329314 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-ovsdbserver-sb\") pod \"ce9a0e38-bff9-4748-85a4-19e165398bae\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.329384 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-ovsdbserver-nb\") pod \"ce9a0e38-bff9-4748-85a4-19e165398bae\" (UID: \"ce9a0e38-bff9-4748-85a4-19e165398bae\") " Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.339041 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9a0e38-bff9-4748-85a4-19e165398bae-kube-api-access-4mldp" (OuterVolumeSpecName: "kube-api-access-4mldp") pod "ce9a0e38-bff9-4748-85a4-19e165398bae" (UID: "ce9a0e38-bff9-4748-85a4-19e165398bae"). InnerVolumeSpecName "kube-api-access-4mldp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.384941 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ce9a0e38-bff9-4748-85a4-19e165398bae" (UID: "ce9a0e38-bff9-4748-85a4-19e165398bae"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.393182 4832 generic.go:334] "Generic (PLEG): container finished" podID="ce9a0e38-bff9-4748-85a4-19e165398bae" containerID="fe30b392659cf097e2aa65ea53615a417daf44c76232d6017c9d52956932626e" exitCode=0 Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.393234 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" event={"ID":"ce9a0e38-bff9-4748-85a4-19e165398bae","Type":"ContainerDied","Data":"fe30b392659cf097e2aa65ea53615a417daf44c76232d6017c9d52956932626e"} Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.393262 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" event={"ID":"ce9a0e38-bff9-4748-85a4-19e165398bae","Type":"ContainerDied","Data":"9f30f1245c55c51b1c56f0d888585ca5ab371887f781878d7210d79c25672046"} Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.393277 4832 scope.go:117] "RemoveContainer" containerID="fe30b392659cf097e2aa65ea53615a417daf44c76232d6017c9d52956932626e" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.393390 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-lp2dv" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.400252 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58b9f48778-gcmpc" event={"ID":"6070e7f1-ea29-422e-9574-77b87a8a9c3b","Type":"ContainerStarted","Data":"8afdc65bff69a9c7c069088a9d7590745063f35fca4ba17d057b11d50f7958d9"} Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.401493 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.401534 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.401545 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.401554 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.410326 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-config" (OuterVolumeSpecName: "config") pod "ce9a0e38-bff9-4748-85a4-19e165398bae" (UID: "ce9a0e38-bff9-4748-85a4-19e165398bae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.417035 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ce9a0e38-bff9-4748-85a4-19e165398bae" (UID: "ce9a0e38-bff9-4748-85a4-19e165398bae"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.429922 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-58b9f48778-gcmpc" podStartSLOduration=3.429902463 podStartE2EDuration="3.429902463s" podCreationTimestamp="2026-03-12 15:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:07:21.421658454 +0000 UTC m=+1200.065672690" watchObservedRunningTime="2026-03-12 15:07:21.429902463 +0000 UTC m=+1200.073916679" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.430110 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce9a0e38-bff9-4748-85a4-19e165398bae" (UID: "ce9a0e38-bff9-4748-85a4-19e165398bae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.431701 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.431733 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.431747 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.431762 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mldp\" (UniqueName: 
\"kubernetes.io/projected/ce9a0e38-bff9-4748-85a4-19e165398bae-kube-api-access-4mldp\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.431776 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.455057 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ce9a0e38-bff9-4748-85a4-19e165398bae" (UID: "ce9a0e38-bff9-4748-85a4-19e165398bae"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.534162 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce9a0e38-bff9-4748-85a4-19e165398bae-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.732731 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lp2dv"] Mar 12 15:07:21 crc kubenswrapper[4832]: I0312 15:07:21.743643 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lp2dv"] Mar 12 15:07:22 crc kubenswrapper[4832]: I0312 15:07:22.648094 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9a0e38-bff9-4748-85a4-19e165398bae" path="/var/lib/kubelet/pods/ce9a0e38-bff9-4748-85a4-19e165398bae/volumes" Mar 12 15:07:23 crc kubenswrapper[4832]: I0312 15:07:23.425347 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 12 15:07:23 crc kubenswrapper[4832]: I0312 15:07:23.426688 4832 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 12 15:07:25 crc kubenswrapper[4832]: I0312 15:07:25.589790 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 12 15:07:25 crc kubenswrapper[4832]: I0312 15:07:25.590096 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 12 15:07:25 crc kubenswrapper[4832]: I0312 15:07:25.620156 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 12 15:07:25 crc kubenswrapper[4832]: I0312 15:07:25.637586 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 12 15:07:25 crc kubenswrapper[4832]: I0312 15:07:25.989116 4832 scope.go:117] "RemoveContainer" containerID="b55fd9b7e82c04e359a691a70ef838dc1e241d3c323f286c97098be1131fb3cf" Mar 12 15:07:26 crc kubenswrapper[4832]: I0312 15:07:26.036812 4832 scope.go:117] "RemoveContainer" containerID="fe30b392659cf097e2aa65ea53615a417daf44c76232d6017c9d52956932626e" Mar 12 15:07:26 crc kubenswrapper[4832]: E0312 15:07:26.037247 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe30b392659cf097e2aa65ea53615a417daf44c76232d6017c9d52956932626e\": container with ID starting with fe30b392659cf097e2aa65ea53615a417daf44c76232d6017c9d52956932626e not found: ID does not exist" containerID="fe30b392659cf097e2aa65ea53615a417daf44c76232d6017c9d52956932626e" Mar 12 15:07:26 crc kubenswrapper[4832]: I0312 15:07:26.037284 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe30b392659cf097e2aa65ea53615a417daf44c76232d6017c9d52956932626e"} err="failed to get container status \"fe30b392659cf097e2aa65ea53615a417daf44c76232d6017c9d52956932626e\": rpc error: code = 
NotFound desc = could not find container \"fe30b392659cf097e2aa65ea53615a417daf44c76232d6017c9d52956932626e\": container with ID starting with fe30b392659cf097e2aa65ea53615a417daf44c76232d6017c9d52956932626e not found: ID does not exist" Mar 12 15:07:26 crc kubenswrapper[4832]: I0312 15:07:26.037306 4832 scope.go:117] "RemoveContainer" containerID="b55fd9b7e82c04e359a691a70ef838dc1e241d3c323f286c97098be1131fb3cf" Mar 12 15:07:26 crc kubenswrapper[4832]: E0312 15:07:26.037775 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b55fd9b7e82c04e359a691a70ef838dc1e241d3c323f286c97098be1131fb3cf\": container with ID starting with b55fd9b7e82c04e359a691a70ef838dc1e241d3c323f286c97098be1131fb3cf not found: ID does not exist" containerID="b55fd9b7e82c04e359a691a70ef838dc1e241d3c323f286c97098be1131fb3cf" Mar 12 15:07:26 crc kubenswrapper[4832]: I0312 15:07:26.037794 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b55fd9b7e82c04e359a691a70ef838dc1e241d3c323f286c97098be1131fb3cf"} err="failed to get container status \"b55fd9b7e82c04e359a691a70ef838dc1e241d3c323f286c97098be1131fb3cf\": rpc error: code = NotFound desc = could not find container \"b55fd9b7e82c04e359a691a70ef838dc1e241d3c323f286c97098be1131fb3cf\": container with ID starting with b55fd9b7e82c04e359a691a70ef838dc1e241d3c323f286c97098be1131fb3cf not found: ID does not exist" Mar 12 15:07:26 crc kubenswrapper[4832]: I0312 15:07:26.313924 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:07:26 crc kubenswrapper[4832]: I0312 15:07:26.313977 4832 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:07:26 crc kubenswrapper[4832]: I0312 15:07:26.469652 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-brwnc" event={"ID":"6f2adafe-55f5-4149-893d-bdf63ec5ef7d","Type":"ContainerStarted","Data":"125e7aa951be5b777ab1984fa78a73bf6e1be246e5d0e99f98120952d5b3fcce"} Mar 12 15:07:26 crc kubenswrapper[4832]: I0312 15:07:26.480373 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fde061f5-d765-49d5-9cff-77ac3d31dd40","Type":"ContainerStarted","Data":"37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b"} Mar 12 15:07:26 crc kubenswrapper[4832]: I0312 15:07:26.480447 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 12 15:07:26 crc kubenswrapper[4832]: I0312 15:07:26.480467 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 12 15:07:26 crc kubenswrapper[4832]: I0312 15:07:26.486532 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-brwnc" podStartSLOduration=2.48657793 podStartE2EDuration="47.486320762s" podCreationTimestamp="2026-03-12 15:06:39 +0000 UTC" firstStartedPulling="2026-03-12 15:06:41.11531342 +0000 UTC m=+1159.759327646" lastFinishedPulling="2026-03-12 15:07:26.115056252 +0000 UTC m=+1204.759070478" observedRunningTime="2026-03-12 15:07:26.485364225 +0000 UTC m=+1205.129378461" watchObservedRunningTime="2026-03-12 15:07:26.486320762 +0000 UTC m=+1205.130334998" Mar 12 15:07:26 crc kubenswrapper[4832]: E0312 15:07:26.854102 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 502 Bad Gateway" image="registry.redhat.io/ubi9/httpd-24:latest" Mar 12 15:07:26 crc kubenswrapper[4832]: E0312 15:07:26.854282 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qqp46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(fde061f5-d765-49d5-9cff-77ac3d31dd40): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 502 Bad Gateway" logger="UnhandledError" Mar 12 15:07:26 crc kubenswrapper[4832]: E0312 15:07:26.855516 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 502 Bad Gateway\"" pod="openstack/ceilometer-0" podUID="fde061f5-d765-49d5-9cff-77ac3d31dd40" Mar 12 15:07:27 crc kubenswrapper[4832]: I0312 15:07:27.487908 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fde061f5-d765-49d5-9cff-77ac3d31dd40" 
containerName="ceilometer-central-agent" containerID="cri-o://830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713" gracePeriod=30 Mar 12 15:07:27 crc kubenswrapper[4832]: I0312 15:07:27.489649 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2hkzk" event={"ID":"8a7d0054-4697-4cbb-bc50-18024fc3bfbc","Type":"ContainerStarted","Data":"bb0fe606493cdd02c1840105af702a1ece7476b3cd77e86712d7eeef567cff6b"} Mar 12 15:07:27 crc kubenswrapper[4832]: I0312 15:07:27.493773 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fde061f5-d765-49d5-9cff-77ac3d31dd40" containerName="sg-core" containerID="cri-o://37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b" gracePeriod=30 Mar 12 15:07:27 crc kubenswrapper[4832]: I0312 15:07:27.493767 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fde061f5-d765-49d5-9cff-77ac3d31dd40" containerName="ceilometer-notification-agent" containerID="cri-o://1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c" gracePeriod=30 Mar 12 15:07:27 crc kubenswrapper[4832]: I0312 15:07:27.554286 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-2hkzk" podStartSLOduration=3.180759338 podStartE2EDuration="48.554254302s" podCreationTimestamp="2026-03-12 15:06:39 +0000 UTC" firstStartedPulling="2026-03-12 15:06:40.744864864 +0000 UTC m=+1159.388879090" lastFinishedPulling="2026-03-12 15:07:26.118359828 +0000 UTC m=+1204.762374054" observedRunningTime="2026-03-12 15:07:27.520787071 +0000 UTC m=+1206.164801297" watchObservedRunningTime="2026-03-12 15:07:27.554254302 +0000 UTC m=+1206.198268528" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.292934 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.371291 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fde061f5-d765-49d5-9cff-77ac3d31dd40-log-httpd\") pod \"fde061f5-d765-49d5-9cff-77ac3d31dd40\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.371383 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-combined-ca-bundle\") pod \"fde061f5-d765-49d5-9cff-77ac3d31dd40\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.371429 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-sg-core-conf-yaml\") pod \"fde061f5-d765-49d5-9cff-77ac3d31dd40\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.371490 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fde061f5-d765-49d5-9cff-77ac3d31dd40-run-httpd\") pod \"fde061f5-d765-49d5-9cff-77ac3d31dd40\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.371553 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-scripts\") pod \"fde061f5-d765-49d5-9cff-77ac3d31dd40\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.371574 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-config-data\") pod \"fde061f5-d765-49d5-9cff-77ac3d31dd40\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.371697 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqp46\" (UniqueName: \"kubernetes.io/projected/fde061f5-d765-49d5-9cff-77ac3d31dd40-kube-api-access-qqp46\") pod \"fde061f5-d765-49d5-9cff-77ac3d31dd40\" (UID: \"fde061f5-d765-49d5-9cff-77ac3d31dd40\") " Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.371878 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fde061f5-d765-49d5-9cff-77ac3d31dd40-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fde061f5-d765-49d5-9cff-77ac3d31dd40" (UID: "fde061f5-d765-49d5-9cff-77ac3d31dd40"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.372205 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fde061f5-d765-49d5-9cff-77ac3d31dd40-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fde061f5-d765-49d5-9cff-77ac3d31dd40" (UID: "fde061f5-d765-49d5-9cff-77ac3d31dd40"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.372539 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fde061f5-d765-49d5-9cff-77ac3d31dd40-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.372558 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fde061f5-d765-49d5-9cff-77ac3d31dd40-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.378256 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-scripts" (OuterVolumeSpecName: "scripts") pod "fde061f5-d765-49d5-9cff-77ac3d31dd40" (UID: "fde061f5-d765-49d5-9cff-77ac3d31dd40"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.389935 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde061f5-d765-49d5-9cff-77ac3d31dd40-kube-api-access-qqp46" (OuterVolumeSpecName: "kube-api-access-qqp46") pod "fde061f5-d765-49d5-9cff-77ac3d31dd40" (UID: "fde061f5-d765-49d5-9cff-77ac3d31dd40"). InnerVolumeSpecName "kube-api-access-qqp46". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.401318 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fde061f5-d765-49d5-9cff-77ac3d31dd40" (UID: "fde061f5-d765-49d5-9cff-77ac3d31dd40"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.435725 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-config-data" (OuterVolumeSpecName: "config-data") pod "fde061f5-d765-49d5-9cff-77ac3d31dd40" (UID: "fde061f5-d765-49d5-9cff-77ac3d31dd40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.444819 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fde061f5-d765-49d5-9cff-77ac3d31dd40" (UID: "fde061f5-d765-49d5-9cff-77ac3d31dd40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.474155 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.474932 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqp46\" (UniqueName: \"kubernetes.io/projected/fde061f5-d765-49d5-9cff-77ac3d31dd40-kube-api-access-qqp46\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.474964 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.474978 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.474992 
4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.475003 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde061f5-d765-49d5-9cff-77ac3d31dd40-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.510160 4832 generic.go:334] "Generic (PLEG): container finished" podID="fde061f5-d765-49d5-9cff-77ac3d31dd40" containerID="37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b" exitCode=2 Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.510453 4832 generic.go:334] "Generic (PLEG): container finished" podID="fde061f5-d765-49d5-9cff-77ac3d31dd40" containerID="1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c" exitCode=0 Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.510553 4832 generic.go:334] "Generic (PLEG): container finished" podID="fde061f5-d765-49d5-9cff-77ac3d31dd40" containerID="830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713" exitCode=0 Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.510258 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.510206 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fde061f5-d765-49d5-9cff-77ac3d31dd40","Type":"ContainerDied","Data":"37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b"} Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.510763 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fde061f5-d765-49d5-9cff-77ac3d31dd40","Type":"ContainerDied","Data":"1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c"} Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.510786 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fde061f5-d765-49d5-9cff-77ac3d31dd40","Type":"ContainerDied","Data":"830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713"} Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.510796 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fde061f5-d765-49d5-9cff-77ac3d31dd40","Type":"ContainerDied","Data":"833b6e5c81219070f273fac99c1b9302ac25733551609e590c92a352b7f39640"} Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.510813 4832 scope.go:117] "RemoveContainer" containerID="37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.511128 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.553962 4832 scope.go:117] "RemoveContainer" containerID="1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.578008 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.585258 4832 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/ceilometer-0"] Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.593598 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:07:28 crc kubenswrapper[4832]: E0312 15:07:28.594396 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9a0e38-bff9-4748-85a4-19e165398bae" containerName="dnsmasq-dns" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.594411 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9a0e38-bff9-4748-85a4-19e165398bae" containerName="dnsmasq-dns" Mar 12 15:07:28 crc kubenswrapper[4832]: E0312 15:07:28.594421 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde061f5-d765-49d5-9cff-77ac3d31dd40" containerName="ceilometer-central-agent" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.594427 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde061f5-d765-49d5-9cff-77ac3d31dd40" containerName="ceilometer-central-agent" Mar 12 15:07:28 crc kubenswrapper[4832]: E0312 15:07:28.594437 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9a0e38-bff9-4748-85a4-19e165398bae" containerName="init" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.594443 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9a0e38-bff9-4748-85a4-19e165398bae" containerName="init" Mar 12 15:07:28 crc kubenswrapper[4832]: E0312 15:07:28.594458 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde061f5-d765-49d5-9cff-77ac3d31dd40" containerName="sg-core" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.594464 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde061f5-d765-49d5-9cff-77ac3d31dd40" containerName="sg-core" Mar 12 15:07:28 crc kubenswrapper[4832]: E0312 15:07:28.594493 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde061f5-d765-49d5-9cff-77ac3d31dd40" containerName="ceilometer-notification-agent" Mar 12 15:07:28 crc 
kubenswrapper[4832]: I0312 15:07:28.594503 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde061f5-d765-49d5-9cff-77ac3d31dd40" containerName="ceilometer-notification-agent" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.594660 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde061f5-d765-49d5-9cff-77ac3d31dd40" containerName="ceilometer-notification-agent" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.594676 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9a0e38-bff9-4748-85a4-19e165398bae" containerName="dnsmasq-dns" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.594690 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde061f5-d765-49d5-9cff-77ac3d31dd40" containerName="ceilometer-central-agent" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.594698 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde061f5-d765-49d5-9cff-77ac3d31dd40" containerName="sg-core" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.596121 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.613761 4832 scope.go:117] "RemoveContainer" containerID="830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.614094 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.614258 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.615673 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.637392 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fde061f5-d765-49d5-9cff-77ac3d31dd40" path="/var/lib/kubelet/pods/fde061f5-d765-49d5-9cff-77ac3d31dd40/volumes" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.658749 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.679578 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fxdj\" (UniqueName: \"kubernetes.io/projected/9a7d1ce7-a309-4632-9396-7625995d919a-kube-api-access-5fxdj\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.679635 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-scripts\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.679684 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a7d1ce7-a309-4632-9396-7625995d919a-run-httpd\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.679741 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.679765 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a7d1ce7-a309-4632-9396-7625995d919a-log-httpd\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.680116 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.680163 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-config-data\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.710280 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:07:28 crc kubenswrapper[4832]: E0312 15:07:28.711176 4832 
pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-5fxdj log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="9a7d1ce7-a309-4632-9396-7625995d919a" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.732690 4832 scope.go:117] "RemoveContainer" containerID="37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b" Mar 12 15:07:28 crc kubenswrapper[4832]: E0312 15:07:28.736622 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b\": container with ID starting with 37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b not found: ID does not exist" containerID="37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.736669 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b"} err="failed to get container status \"37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b\": rpc error: code = NotFound desc = could not find container \"37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b\": container with ID starting with 37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b not found: ID does not exist" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.736690 4832 scope.go:117] "RemoveContainer" containerID="1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c" Mar 12 15:07:28 crc kubenswrapper[4832]: E0312 15:07:28.738938 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c\": 
container with ID starting with 1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c not found: ID does not exist" containerID="1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.738986 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c"} err="failed to get container status \"1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c\": rpc error: code = NotFound desc = could not find container \"1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c\": container with ID starting with 1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c not found: ID does not exist" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.739019 4832 scope.go:117] "RemoveContainer" containerID="830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713" Mar 12 15:07:28 crc kubenswrapper[4832]: E0312 15:07:28.740946 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713\": container with ID starting with 830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713 not found: ID does not exist" containerID="830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.740971 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713"} err="failed to get container status \"830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713\": rpc error: code = NotFound desc = could not find container \"830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713\": container with ID starting with 
830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713 not found: ID does not exist" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.740989 4832 scope.go:117] "RemoveContainer" containerID="37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.745871 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b"} err="failed to get container status \"37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b\": rpc error: code = NotFound desc = could not find container \"37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b\": container with ID starting with 37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b not found: ID does not exist" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.745906 4832 scope.go:117] "RemoveContainer" containerID="1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.746615 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c"} err="failed to get container status \"1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c\": rpc error: code = NotFound desc = could not find container \"1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c\": container with ID starting with 1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c not found: ID does not exist" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.746632 4832 scope.go:117] "RemoveContainer" containerID="830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.746838 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713"} err="failed to get container status \"830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713\": rpc error: code = NotFound desc = could not find container \"830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713\": container with ID starting with 830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713 not found: ID does not exist" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.746851 4832 scope.go:117] "RemoveContainer" containerID="37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.747041 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b"} err="failed to get container status \"37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b\": rpc error: code = NotFound desc = could not find container \"37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b\": container with ID starting with 37687c250eed35aab37dacbb6210976227b0ca6ba2dfc8ec5ae7b7f07afd580b not found: ID does not exist" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.747063 4832 scope.go:117] "RemoveContainer" containerID="1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.747254 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c"} err="failed to get container status \"1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c\": rpc error: code = NotFound desc = could not find container \"1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c\": container with ID starting with 1a7c4d248503ead32d855ceb7422ea48112abe197de794b16ff4d2eed28e917c not found: ID does not 
exist" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.747270 4832 scope.go:117] "RemoveContainer" containerID="830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.747455 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713"} err="failed to get container status \"830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713\": rpc error: code = NotFound desc = could not find container \"830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713\": container with ID starting with 830cd679b8f0b844850b477e52b46ce25edf218d904a127325fd5ad8be841713 not found: ID does not exist" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.781499 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.781575 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-config-data\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.781616 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fxdj\" (UniqueName: \"kubernetes.io/projected/9a7d1ce7-a309-4632-9396-7625995d919a-kube-api-access-5fxdj\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.781641 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-scripts\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.781677 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a7d1ce7-a309-4632-9396-7625995d919a-run-httpd\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.781705 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.781729 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a7d1ce7-a309-4632-9396-7625995d919a-log-httpd\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.782238 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a7d1ce7-a309-4632-9396-7625995d919a-log-httpd\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.789146 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 
15:07:28.792626 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a7d1ce7-a309-4632-9396-7625995d919a-run-httpd\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.795375 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-scripts\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.797334 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-config-data\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.803186 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:28 crc kubenswrapper[4832]: I0312 15:07:28.811622 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fxdj\" (UniqueName: \"kubernetes.io/projected/9a7d1ce7-a309-4632-9396-7625995d919a-kube-api-access-5fxdj\") pod \"ceilometer-0\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " pod="openstack/ceilometer-0" Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.522072 4832 generic.go:334] "Generic (PLEG): container finished" podID="6f2adafe-55f5-4149-893d-bdf63ec5ef7d" containerID="125e7aa951be5b777ab1984fa78a73bf6e1be246e5d0e99f98120952d5b3fcce" exitCode=0 Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.522154 
4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-brwnc" event={"ID":"6f2adafe-55f5-4149-893d-bdf63ec5ef7d","Type":"ContainerDied","Data":"125e7aa951be5b777ab1984fa78a73bf6e1be246e5d0e99f98120952d5b3fcce"} Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.523853 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.533615 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-745cdbf99b-kdz5c" podUID="7b32181c-0268-4e3e-8b7b-f2811720ce58" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.534599 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.593210 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a7d1ce7-a309-4632-9396-7625995d919a-log-httpd\") pod \"9a7d1ce7-a309-4632-9396-7625995d919a\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.593265 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-scripts\") pod \"9a7d1ce7-a309-4632-9396-7625995d919a\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.593294 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a7d1ce7-a309-4632-9396-7625995d919a-run-httpd\") pod \"9a7d1ce7-a309-4632-9396-7625995d919a\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " Mar 
12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.593404 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-combined-ca-bundle\") pod \"9a7d1ce7-a309-4632-9396-7625995d919a\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.593462 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-config-data\") pod \"9a7d1ce7-a309-4632-9396-7625995d919a\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.593498 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-sg-core-conf-yaml\") pod \"9a7d1ce7-a309-4632-9396-7625995d919a\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.593568 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fxdj\" (UniqueName: \"kubernetes.io/projected/9a7d1ce7-a309-4632-9396-7625995d919a-kube-api-access-5fxdj\") pod \"9a7d1ce7-a309-4632-9396-7625995d919a\" (UID: \"9a7d1ce7-a309-4632-9396-7625995d919a\") " Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.594100 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a7d1ce7-a309-4632-9396-7625995d919a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9a7d1ce7-a309-4632-9396-7625995d919a" (UID: "9a7d1ce7-a309-4632-9396-7625995d919a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.594284 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a7d1ce7-a309-4632-9396-7625995d919a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9a7d1ce7-a309-4632-9396-7625995d919a" (UID: "9a7d1ce7-a309-4632-9396-7625995d919a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.595816 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a7d1ce7-a309-4632-9396-7625995d919a-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.595871 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a7d1ce7-a309-4632-9396-7625995d919a-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.598611 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a7d1ce7-a309-4632-9396-7625995d919a" (UID: "9a7d1ce7-a309-4632-9396-7625995d919a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.600762 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a7d1ce7-a309-4632-9396-7625995d919a-kube-api-access-5fxdj" (OuterVolumeSpecName: "kube-api-access-5fxdj") pod "9a7d1ce7-a309-4632-9396-7625995d919a" (UID: "9a7d1ce7-a309-4632-9396-7625995d919a"). InnerVolumeSpecName "kube-api-access-5fxdj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.608680 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-config-data" (OuterVolumeSpecName: "config-data") pod "9a7d1ce7-a309-4632-9396-7625995d919a" (UID: "9a7d1ce7-a309-4632-9396-7625995d919a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.608728 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-scripts" (OuterVolumeSpecName: "scripts") pod "9a7d1ce7-a309-4632-9396-7625995d919a" (UID: "9a7d1ce7-a309-4632-9396-7625995d919a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.608774 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9a7d1ce7-a309-4632-9396-7625995d919a" (UID: "9a7d1ce7-a309-4632-9396-7625995d919a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.698002 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.698045 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.698060 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.698071 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9a7d1ce7-a309-4632-9396-7625995d919a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.698083 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fxdj\" (UniqueName: \"kubernetes.io/projected/9a7d1ce7-a309-4632-9396-7625995d919a-kube-api-access-5fxdj\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:29 crc kubenswrapper[4832]: I0312 15:07:29.958878 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7c5974b5d4-dhhm8" podUID="06633b31-01e2-4a1c-bf9e-e74b157fba1d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.154:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.154:8443: connect: connection refused" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.532244 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.669805 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.673309 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.680690 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.682789 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.684980 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.685389 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.695133 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.830659 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.830725 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-config-data\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.830745 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-scripts\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.830811 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.830858 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b235a23d-a7da-4545-8047-36a5c88b66bb-run-httpd\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.830985 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cn8s\" (UniqueName: \"kubernetes.io/projected/b235a23d-a7da-4545-8047-36a5c88b66bb-kube-api-access-7cn8s\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.831098 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b235a23d-a7da-4545-8047-36a5c88b66bb-log-httpd\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.929625 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-brwnc" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.932483 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b235a23d-a7da-4545-8047-36a5c88b66bb-run-httpd\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.932584 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cn8s\" (UniqueName: \"kubernetes.io/projected/b235a23d-a7da-4545-8047-36a5c88b66bb-kube-api-access-7cn8s\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.932642 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b235a23d-a7da-4545-8047-36a5c88b66bb-log-httpd\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.932686 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.932728 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-config-data\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.932813 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-scripts\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.932863 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.932863 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b235a23d-a7da-4545-8047-36a5c88b66bb-run-httpd\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.935863 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b235a23d-a7da-4545-8047-36a5c88b66bb-log-httpd\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.950137 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.950167 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.950801 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-scripts\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.953664 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-config-data\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:30 crc kubenswrapper[4832]: I0312 15:07:30.968445 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cn8s\" (UniqueName: \"kubernetes.io/projected/b235a23d-a7da-4545-8047-36a5c88b66bb-kube-api-access-7cn8s\") pod \"ceilometer-0\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " pod="openstack/ceilometer-0" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.004574 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.033603 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f2adafe-55f5-4149-893d-bdf63ec5ef7d-combined-ca-bundle\") pod \"6f2adafe-55f5-4149-893d-bdf63ec5ef7d\" (UID: \"6f2adafe-55f5-4149-893d-bdf63ec5ef7d\") " Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.033707 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgrgc\" (UniqueName: \"kubernetes.io/projected/6f2adafe-55f5-4149-893d-bdf63ec5ef7d-kube-api-access-wgrgc\") pod \"6f2adafe-55f5-4149-893d-bdf63ec5ef7d\" (UID: \"6f2adafe-55f5-4149-893d-bdf63ec5ef7d\") " Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.033933 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f2adafe-55f5-4149-893d-bdf63ec5ef7d-db-sync-config-data\") pod \"6f2adafe-55f5-4149-893d-bdf63ec5ef7d\" (UID: \"6f2adafe-55f5-4149-893d-bdf63ec5ef7d\") " Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.037700 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f2adafe-55f5-4149-893d-bdf63ec5ef7d-kube-api-access-wgrgc" (OuterVolumeSpecName: "kube-api-access-wgrgc") pod "6f2adafe-55f5-4149-893d-bdf63ec5ef7d" (UID: "6f2adafe-55f5-4149-893d-bdf63ec5ef7d"). InnerVolumeSpecName "kube-api-access-wgrgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.041687 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f2adafe-55f5-4149-893d-bdf63ec5ef7d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6f2adafe-55f5-4149-893d-bdf63ec5ef7d" (UID: "6f2adafe-55f5-4149-893d-bdf63ec5ef7d"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.061269 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f2adafe-55f5-4149-893d-bdf63ec5ef7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f2adafe-55f5-4149-893d-bdf63ec5ef7d" (UID: "6f2adafe-55f5-4149-893d-bdf63ec5ef7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.135966 4832 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f2adafe-55f5-4149-893d-bdf63ec5ef7d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.135994 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f2adafe-55f5-4149-893d-bdf63ec5ef7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.136004 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgrgc\" (UniqueName: \"kubernetes.io/projected/6f2adafe-55f5-4149-893d-bdf63ec5ef7d-kube-api-access-wgrgc\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.498797 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:07:31 crc kubenswrapper[4832]: W0312 15:07:31.506186 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb235a23d_a7da_4545_8047_36a5c88b66bb.slice/crio-9cd10fa3cb05354c0da5c497fcd240730dfbb46486ce43b1e0243cfdaa7149f9 WatchSource:0}: Error finding container 9cd10fa3cb05354c0da5c497fcd240730dfbb46486ce43b1e0243cfdaa7149f9: Status 404 returned error can't find the container with id 
9cd10fa3cb05354c0da5c497fcd240730dfbb46486ce43b1e0243cfdaa7149f9 Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.545478 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-brwnc" event={"ID":"6f2adafe-55f5-4149-893d-bdf63ec5ef7d","Type":"ContainerDied","Data":"9d82a762554b0fee4c1c86fca26ff10dfa54b136e9a5a6db11413f2a96224ab7"} Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.545534 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d82a762554b0fee4c1c86fca26ff10dfa54b136e9a5a6db11413f2a96224ab7" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.545535 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-brwnc" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.549743 4832 generic.go:334] "Generic (PLEG): container finished" podID="8a7d0054-4697-4cbb-bc50-18024fc3bfbc" containerID="bb0fe606493cdd02c1840105af702a1ece7476b3cd77e86712d7eeef567cff6b" exitCode=0 Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.549876 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2hkzk" event={"ID":"8a7d0054-4697-4cbb-bc50-18024fc3bfbc","Type":"ContainerDied","Data":"bb0fe606493cdd02c1840105af702a1ece7476b3cd77e86712d7eeef567cff6b"} Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.565251 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b235a23d-a7da-4545-8047-36a5c88b66bb","Type":"ContainerStarted","Data":"9cd10fa3cb05354c0da5c497fcd240730dfbb46486ce43b1e0243cfdaa7149f9"} Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.810350 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5586cfd7d8-lcjh2"] Mar 12 15:07:31 crc kubenswrapper[4832]: E0312 15:07:31.810801 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f2adafe-55f5-4149-893d-bdf63ec5ef7d" 
containerName="barbican-db-sync" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.810816 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f2adafe-55f5-4149-893d-bdf63ec5ef7d" containerName="barbican-db-sync" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.811039 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f2adafe-55f5-4149-893d-bdf63ec5ef7d" containerName="barbican-db-sync" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.840621 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5456b889d5-mb698"] Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.840977 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.846719 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5456b889d5-mb698" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.847338 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.848384 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5586cfd7d8-lcjh2"] Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.858821 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.848289 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.848362 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5t7mn" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.867109 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-worker-5456b889d5-mb698"] Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.952604 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-vqwdw"] Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.953979 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.966866 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44661885-c36b-4450-b181-4bfa5f442420-config-data\") pod \"barbican-worker-5456b889d5-mb698\" (UID: \"44661885-c36b-4450-b181-4bfa5f442420\") " pod="openstack/barbican-worker-5456b889d5-mb698" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.966904 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44661885-c36b-4450-b181-4bfa5f442420-config-data-custom\") pod \"barbican-worker-5456b889d5-mb698\" (UID: \"44661885-c36b-4450-b181-4bfa5f442420\") " pod="openstack/barbican-worker-5456b889d5-mb698" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.966934 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctsgg\" (UniqueName: \"kubernetes.io/projected/44661885-c36b-4450-b181-4bfa5f442420-kube-api-access-ctsgg\") pod \"barbican-worker-5456b889d5-mb698\" (UID: \"44661885-c36b-4450-b181-4bfa5f442420\") " pod="openstack/barbican-worker-5456b889d5-mb698" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.966955 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f29ee66-6d6f-4940-9283-7bd2bff068b6-config-data\") pod \"barbican-keystone-listener-5586cfd7d8-lcjh2\" (UID: 
\"4f29ee66-6d6f-4940-9283-7bd2bff068b6\") " pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.967000 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f29ee66-6d6f-4940-9283-7bd2bff068b6-logs\") pod \"barbican-keystone-listener-5586cfd7d8-lcjh2\" (UID: \"4f29ee66-6d6f-4940-9283-7bd2bff068b6\") " pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.967019 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l5pk\" (UniqueName: \"kubernetes.io/projected/4f29ee66-6d6f-4940-9283-7bd2bff068b6-kube-api-access-6l5pk\") pod \"barbican-keystone-listener-5586cfd7d8-lcjh2\" (UID: \"4f29ee66-6d6f-4940-9283-7bd2bff068b6\") " pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.967045 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44661885-c36b-4450-b181-4bfa5f442420-logs\") pod \"barbican-worker-5456b889d5-mb698\" (UID: \"44661885-c36b-4450-b181-4bfa5f442420\") " pod="openstack/barbican-worker-5456b889d5-mb698" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.967078 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f29ee66-6d6f-4940-9283-7bd2bff068b6-config-data-custom\") pod \"barbican-keystone-listener-5586cfd7d8-lcjh2\" (UID: \"4f29ee66-6d6f-4940-9283-7bd2bff068b6\") " pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.967099 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/44661885-c36b-4450-b181-4bfa5f442420-combined-ca-bundle\") pod \"barbican-worker-5456b889d5-mb698\" (UID: \"44661885-c36b-4450-b181-4bfa5f442420\") " pod="openstack/barbican-worker-5456b889d5-mb698" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.967173 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f29ee66-6d6f-4940-9283-7bd2bff068b6-combined-ca-bundle\") pod \"barbican-keystone-listener-5586cfd7d8-lcjh2\" (UID: \"4f29ee66-6d6f-4940-9283-7bd2bff068b6\") " pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" Mar 12 15:07:31 crc kubenswrapper[4832]: I0312 15:07:31.984255 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-vqwdw"] Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.022935 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7f7c7fbfc6-5ms49"] Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.025023 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.030366 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.050774 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f7c7fbfc6-5ms49"] Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.069463 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f29ee66-6d6f-4940-9283-7bd2bff068b6-logs\") pod \"barbican-keystone-listener-5586cfd7d8-lcjh2\" (UID: \"4f29ee66-6d6f-4940-9283-7bd2bff068b6\") " pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.069537 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l5pk\" (UniqueName: \"kubernetes.io/projected/4f29ee66-6d6f-4940-9283-7bd2bff068b6-kube-api-access-6l5pk\") pod \"barbican-keystone-listener-5586cfd7d8-lcjh2\" (UID: \"4f29ee66-6d6f-4940-9283-7bd2bff068b6\") " pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.069574 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44661885-c36b-4450-b181-4bfa5f442420-logs\") pod \"barbican-worker-5456b889d5-mb698\" (UID: \"44661885-c36b-4450-b181-4bfa5f442420\") " pod="openstack/barbican-worker-5456b889d5-mb698" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.069618 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-config\") pod \"dnsmasq-dns-848cf88cfc-vqwdw\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " 
pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.069652 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-vqwdw\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.069690 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f29ee66-6d6f-4940-9283-7bd2bff068b6-config-data-custom\") pod \"barbican-keystone-listener-5586cfd7d8-lcjh2\" (UID: \"4f29ee66-6d6f-4940-9283-7bd2bff068b6\") " pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.069707 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44661885-c36b-4450-b181-4bfa5f442420-combined-ca-bundle\") pod \"barbican-worker-5456b889d5-mb698\" (UID: \"44661885-c36b-4450-b181-4bfa5f442420\") " pod="openstack/barbican-worker-5456b889d5-mb698" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.069776 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-vqwdw\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.069807 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-vqwdw\" (UID: 
\"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.069854 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f29ee66-6d6f-4940-9283-7bd2bff068b6-combined-ca-bundle\") pod \"barbican-keystone-listener-5586cfd7d8-lcjh2\" (UID: \"4f29ee66-6d6f-4940-9283-7bd2bff068b6\") " pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.069883 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44661885-c36b-4450-b181-4bfa5f442420-config-data\") pod \"barbican-worker-5456b889d5-mb698\" (UID: \"44661885-c36b-4450-b181-4bfa5f442420\") " pod="openstack/barbican-worker-5456b889d5-mb698" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.069898 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44661885-c36b-4450-b181-4bfa5f442420-config-data-custom\") pod \"barbican-worker-5456b889d5-mb698\" (UID: \"44661885-c36b-4450-b181-4bfa5f442420\") " pod="openstack/barbican-worker-5456b889d5-mb698" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.069915 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnv6p\" (UniqueName: \"kubernetes.io/projected/957bf113-e0d7-4a03-a4dc-2968a44f66f6-kube-api-access-cnv6p\") pod \"dnsmasq-dns-848cf88cfc-vqwdw\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.069942 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctsgg\" (UniqueName: \"kubernetes.io/projected/44661885-c36b-4450-b181-4bfa5f442420-kube-api-access-ctsgg\") pod 
\"barbican-worker-5456b889d5-mb698\" (UID: \"44661885-c36b-4450-b181-4bfa5f442420\") " pod="openstack/barbican-worker-5456b889d5-mb698" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.069962 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f29ee66-6d6f-4940-9283-7bd2bff068b6-config-data\") pod \"barbican-keystone-listener-5586cfd7d8-lcjh2\" (UID: \"4f29ee66-6d6f-4940-9283-7bd2bff068b6\") " pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.070002 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-vqwdw\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.070417 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f29ee66-6d6f-4940-9283-7bd2bff068b6-logs\") pod \"barbican-keystone-listener-5586cfd7d8-lcjh2\" (UID: \"4f29ee66-6d6f-4940-9283-7bd2bff068b6\") " pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.070960 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44661885-c36b-4450-b181-4bfa5f442420-logs\") pod \"barbican-worker-5456b889d5-mb698\" (UID: \"44661885-c36b-4450-b181-4bfa5f442420\") " pod="openstack/barbican-worker-5456b889d5-mb698" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.081331 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f29ee66-6d6f-4940-9283-7bd2bff068b6-config-data-custom\") pod 
\"barbican-keystone-listener-5586cfd7d8-lcjh2\" (UID: \"4f29ee66-6d6f-4940-9283-7bd2bff068b6\") " pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.081362 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44661885-c36b-4450-b181-4bfa5f442420-combined-ca-bundle\") pod \"barbican-worker-5456b889d5-mb698\" (UID: \"44661885-c36b-4450-b181-4bfa5f442420\") " pod="openstack/barbican-worker-5456b889d5-mb698" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.082763 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f29ee66-6d6f-4940-9283-7bd2bff068b6-combined-ca-bundle\") pod \"barbican-keystone-listener-5586cfd7d8-lcjh2\" (UID: \"4f29ee66-6d6f-4940-9283-7bd2bff068b6\") " pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.094238 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44661885-c36b-4450-b181-4bfa5f442420-config-data-custom\") pod \"barbican-worker-5456b889d5-mb698\" (UID: \"44661885-c36b-4450-b181-4bfa5f442420\") " pod="openstack/barbican-worker-5456b889d5-mb698" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.097055 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctsgg\" (UniqueName: \"kubernetes.io/projected/44661885-c36b-4450-b181-4bfa5f442420-kube-api-access-ctsgg\") pod \"barbican-worker-5456b889d5-mb698\" (UID: \"44661885-c36b-4450-b181-4bfa5f442420\") " pod="openstack/barbican-worker-5456b889d5-mb698" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.097473 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l5pk\" (UniqueName: 
\"kubernetes.io/projected/4f29ee66-6d6f-4940-9283-7bd2bff068b6-kube-api-access-6l5pk\") pod \"barbican-keystone-listener-5586cfd7d8-lcjh2\" (UID: \"4f29ee66-6d6f-4940-9283-7bd2bff068b6\") " pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.097632 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44661885-c36b-4450-b181-4bfa5f442420-config-data\") pod \"barbican-worker-5456b889d5-mb698\" (UID: \"44661885-c36b-4450-b181-4bfa5f442420\") " pod="openstack/barbican-worker-5456b889d5-mb698" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.101062 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f29ee66-6d6f-4940-9283-7bd2bff068b6-config-data\") pod \"barbican-keystone-listener-5586cfd7d8-lcjh2\" (UID: \"4f29ee66-6d6f-4940-9283-7bd2bff068b6\") " pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.172649 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-config\") pod \"dnsmasq-dns-848cf88cfc-vqwdw\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.172713 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-vqwdw\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.172739 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a7becc0c-3efc-4992-ab31-6c67fd190769-config-data\") pod \"barbican-api-7f7c7fbfc6-5ms49\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.172793 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7becc0c-3efc-4992-ab31-6c67fd190769-combined-ca-bundle\") pod \"barbican-api-7f7c7fbfc6-5ms49\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.172820 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7becc0c-3efc-4992-ab31-6c67fd190769-config-data-custom\") pod \"barbican-api-7f7c7fbfc6-5ms49\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.172840 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjmhl\" (UniqueName: \"kubernetes.io/projected/a7becc0c-3efc-4992-ab31-6c67fd190769-kube-api-access-pjmhl\") pod \"barbican-api-7f7c7fbfc6-5ms49\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.172861 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-vqwdw\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.172892 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-vqwdw\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.172929 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7becc0c-3efc-4992-ab31-6c67fd190769-logs\") pod \"barbican-api-7f7c7fbfc6-5ms49\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.172958 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnv6p\" (UniqueName: \"kubernetes.io/projected/957bf113-e0d7-4a03-a4dc-2968a44f66f6-kube-api-access-cnv6p\") pod \"dnsmasq-dns-848cf88cfc-vqwdw\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.173033 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-vqwdw\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.174008 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-config\") pod \"dnsmasq-dns-848cf88cfc-vqwdw\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.174027 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-vqwdw\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.174097 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-vqwdw\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.174095 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-vqwdw\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.174616 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-vqwdw\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.195399 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnv6p\" (UniqueName: \"kubernetes.io/projected/957bf113-e0d7-4a03-a4dc-2968a44f66f6-kube-api-access-cnv6p\") pod \"dnsmasq-dns-848cf88cfc-vqwdw\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.242015 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.251096 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5456b889d5-mb698" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.272901 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.274008 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7becc0c-3efc-4992-ab31-6c67fd190769-config-data-custom\") pod \"barbican-api-7f7c7fbfc6-5ms49\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.274039 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjmhl\" (UniqueName: \"kubernetes.io/projected/a7becc0c-3efc-4992-ab31-6c67fd190769-kube-api-access-pjmhl\") pod \"barbican-api-7f7c7fbfc6-5ms49\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.274084 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7becc0c-3efc-4992-ab31-6c67fd190769-logs\") pod \"barbican-api-7f7c7fbfc6-5ms49\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.274166 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7becc0c-3efc-4992-ab31-6c67fd190769-config-data\") pod \"barbican-api-7f7c7fbfc6-5ms49\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " 
pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.274204 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7becc0c-3efc-4992-ab31-6c67fd190769-combined-ca-bundle\") pod \"barbican-api-7f7c7fbfc6-5ms49\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.275355 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7becc0c-3efc-4992-ab31-6c67fd190769-logs\") pod \"barbican-api-7f7c7fbfc6-5ms49\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.280476 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7becc0c-3efc-4992-ab31-6c67fd190769-config-data-custom\") pod \"barbican-api-7f7c7fbfc6-5ms49\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.280791 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7becc0c-3efc-4992-ab31-6c67fd190769-combined-ca-bundle\") pod \"barbican-api-7f7c7fbfc6-5ms49\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.284156 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7becc0c-3efc-4992-ab31-6c67fd190769-config-data\") pod \"barbican-api-7f7c7fbfc6-5ms49\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:32 crc kubenswrapper[4832]: 
I0312 15:07:32.294237 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjmhl\" (UniqueName: \"kubernetes.io/projected/a7becc0c-3efc-4992-ab31-6c67fd190769-kube-api-access-pjmhl\") pod \"barbican-api-7f7c7fbfc6-5ms49\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.542793 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.581211 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b235a23d-a7da-4545-8047-36a5c88b66bb","Type":"ContainerStarted","Data":"c0fe9aec06c6585df3e5716ff58fdf792efa19a99db7c42bdd5c820ec5e575cb"} Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.682045 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a7d1ce7-a309-4632-9396-7625995d919a" path="/var/lib/kubelet/pods/9a7d1ce7-a309-4632-9396-7625995d919a/volumes" Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.734462 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5586cfd7d8-lcjh2"] Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.801439 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5456b889d5-mb698"] Mar 12 15:07:32 crc kubenswrapper[4832]: I0312 15:07:32.818541 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-vqwdw"] Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.194223 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f7c7fbfc6-5ms49"] Mar 12 15:07:33 crc kubenswrapper[4832]: W0312 15:07:33.211608 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7becc0c_3efc_4992_ab31_6c67fd190769.slice/crio-96b235365eae397e46c7d51da0751eba2936a6496b14369898c4fbb4408bebf2 WatchSource:0}: Error finding container 96b235365eae397e46c7d51da0751eba2936a6496b14369898c4fbb4408bebf2: Status 404 returned error can't find the container with id 96b235365eae397e46c7d51da0751eba2936a6496b14369898c4fbb4408bebf2 Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.256059 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.407808 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jdjb\" (UniqueName: \"kubernetes.io/projected/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-kube-api-access-8jdjb\") pod \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.408082 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-combined-ca-bundle\") pod \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.408123 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-config-data\") pod \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.408163 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-etc-machine-id\") pod \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\" (UID: 
\"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.408215 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-db-sync-config-data\") pod \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.408243 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-scripts\") pod \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\" (UID: \"8a7d0054-4697-4cbb-bc50-18024fc3bfbc\") " Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.408937 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8a7d0054-4697-4cbb-bc50-18024fc3bfbc" (UID: "8a7d0054-4697-4cbb-bc50-18024fc3bfbc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.412245 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-scripts" (OuterVolumeSpecName: "scripts") pod "8a7d0054-4697-4cbb-bc50-18024fc3bfbc" (UID: "8a7d0054-4697-4cbb-bc50-18024fc3bfbc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.412266 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-kube-api-access-8jdjb" (OuterVolumeSpecName: "kube-api-access-8jdjb") pod "8a7d0054-4697-4cbb-bc50-18024fc3bfbc" (UID: "8a7d0054-4697-4cbb-bc50-18024fc3bfbc"). 
InnerVolumeSpecName "kube-api-access-8jdjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.413637 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8a7d0054-4697-4cbb-bc50-18024fc3bfbc" (UID: "8a7d0054-4697-4cbb-bc50-18024fc3bfbc"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.444630 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a7d0054-4697-4cbb-bc50-18024fc3bfbc" (UID: "8a7d0054-4697-4cbb-bc50-18024fc3bfbc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.487659 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-config-data" (OuterVolumeSpecName: "config-data") pod "8a7d0054-4697-4cbb-bc50-18024fc3bfbc" (UID: "8a7d0054-4697-4cbb-bc50-18024fc3bfbc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.510859 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.510898 4832 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.510913 4832 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.510927 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.510940 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jdjb\" (UniqueName: \"kubernetes.io/projected/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-kube-api-access-8jdjb\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.510952 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7d0054-4697-4cbb-bc50-18024fc3bfbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.588848 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5456b889d5-mb698" event={"ID":"44661885-c36b-4450-b181-4bfa5f442420","Type":"ContainerStarted","Data":"6c70beda8d85503fa856db9adf71105b0309942c20839ec48a0053fd405d0630"} Mar 12 15:07:33 crc 
kubenswrapper[4832]: I0312 15:07:33.590305 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b235a23d-a7da-4545-8047-36a5c88b66bb","Type":"ContainerStarted","Data":"e14f155849ccf32d23a85d174aa6ae6ae743f5b69f1f8002159a0126c33000f0"} Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.594728 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" event={"ID":"4f29ee66-6d6f-4940-9283-7bd2bff068b6","Type":"ContainerStarted","Data":"0e497cba1f4b74437a11ea603a4221aa395299a98ea617fe50ba81f57b7f2fb3"} Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.596968 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f7c7fbfc6-5ms49" event={"ID":"a7becc0c-3efc-4992-ab31-6c67fd190769","Type":"ContainerStarted","Data":"1d02d5b61d494ddf85c1889fec126c43bd143dd15c9cd1731536fd1ecac64050"} Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.597013 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f7c7fbfc6-5ms49" event={"ID":"a7becc0c-3efc-4992-ab31-6c67fd190769","Type":"ContainerStarted","Data":"914738b41af1b673ad4af843792ed4ce988afba46843df20a03484c5fb77fa54"} Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.597023 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f7c7fbfc6-5ms49" event={"ID":"a7becc0c-3efc-4992-ab31-6c67fd190769","Type":"ContainerStarted","Data":"96b235365eae397e46c7d51da0751eba2936a6496b14369898c4fbb4408bebf2"} Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.597078 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.599148 4832 generic.go:334] "Generic (PLEG): container finished" podID="957bf113-e0d7-4a03-a4dc-2968a44f66f6" containerID="cc309d7541d187299bb521a1d4eccb89bb473d09a33fb46cc9bad4db0b4a9849" exitCode=0 Mar 12 
15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.599212 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" event={"ID":"957bf113-e0d7-4a03-a4dc-2968a44f66f6","Type":"ContainerDied","Data":"cc309d7541d187299bb521a1d4eccb89bb473d09a33fb46cc9bad4db0b4a9849"} Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.599234 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" event={"ID":"957bf113-e0d7-4a03-a4dc-2968a44f66f6","Type":"ContainerStarted","Data":"487068fdbaac6554b63ca5bb853e6cdde89d5ed36d95c7f42f5668f94dfbb164"} Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.602993 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2hkzk" event={"ID":"8a7d0054-4697-4cbb-bc50-18024fc3bfbc","Type":"ContainerDied","Data":"60e260f83e96574ad3b6326fc262920a98b192e3b1c0a03bbb91aa319ae2688d"} Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.603037 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60e260f83e96574ad3b6326fc262920a98b192e3b1c0a03bbb91aa319ae2688d" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.603092 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2hkzk" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.647618 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7f7c7fbfc6-5ms49" podStartSLOduration=2.64759821 podStartE2EDuration="2.64759821s" podCreationTimestamp="2026-03-12 15:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:07:33.626201529 +0000 UTC m=+1212.270215755" watchObservedRunningTime="2026-03-12 15:07:33.64759821 +0000 UTC m=+1212.291612436" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.920034 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 15:07:33 crc kubenswrapper[4832]: E0312 15:07:33.920696 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a7d0054-4697-4cbb-bc50-18024fc3bfbc" containerName="cinder-db-sync" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.920709 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a7d0054-4697-4cbb-bc50-18024fc3bfbc" containerName="cinder-db-sync" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.920886 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a7d0054-4697-4cbb-bc50-18024fc3bfbc" containerName="cinder-db-sync" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.921772 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.927478 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.927665 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-g729q" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.927787 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.927882 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.961773 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 15:07:33 crc kubenswrapper[4832]: I0312 15:07:33.986784 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-vqwdw"] Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.019836 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.019878 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.019907 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kbvmb\" (UniqueName: \"kubernetes.io/projected/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-kube-api-access-kbvmb\") pod \"cinder-scheduler-0\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.019928 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.019989 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-config-data\") pod \"cinder-scheduler-0\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.020031 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-scripts\") pod \"cinder-scheduler-0\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.032608 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-zj8ll"] Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.036141 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.044139 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-zj8ll"] Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.123785 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-config-data\") pod \"cinder-scheduler-0\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.123886 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-zj8ll\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.123910 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-scripts\") pod \"cinder-scheduler-0\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.123931 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-dns-svc\") pod \"dnsmasq-dns-6578955fd5-zj8ll\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.123983 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-zj8ll\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.124043 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.124069 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-config\") pod \"dnsmasq-dns-6578955fd5-zj8ll\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.124089 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.124122 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbvmb\" (UniqueName: \"kubernetes.io/projected/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-kube-api-access-kbvmb\") pod \"cinder-scheduler-0\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.124163 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-config-data-custom\") pod 
\"cinder-scheduler-0\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.124193 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-zj8ll\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.124245 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlhnb\" (UniqueName: \"kubernetes.io/projected/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-kube-api-access-qlhnb\") pod \"dnsmasq-dns-6578955fd5-zj8ll\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.135308 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.136638 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-config-data\") pod \"cinder-scheduler-0\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.136901 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " 
pod="openstack/cinder-scheduler-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.149309 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-scripts\") pod \"cinder-scheduler-0\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.168394 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.190301 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbvmb\" (UniqueName: \"kubernetes.io/projected/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-kube-api-access-kbvmb\") pod \"cinder-scheduler-0\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.225841 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-zj8ll\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.225907 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-dns-svc\") pod \"dnsmasq-dns-6578955fd5-zj8ll\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.225963 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-zj8ll\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.226029 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-config\") pod \"dnsmasq-dns-6578955fd5-zj8ll\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.226080 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-zj8ll\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.226129 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlhnb\" (UniqueName: \"kubernetes.io/projected/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-kube-api-access-qlhnb\") pod \"dnsmasq-dns-6578955fd5-zj8ll\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.227387 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-zj8ll\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.227474 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-zj8ll\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.228058 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-config\") pod \"dnsmasq-dns-6578955fd5-zj8ll\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.235195 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-dns-svc\") pod \"dnsmasq-dns-6578955fd5-zj8ll\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.237248 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-zj8ll\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.265975 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.281755 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlhnb\" (UniqueName: \"kubernetes.io/projected/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-kube-api-access-qlhnb\") pod \"dnsmasq-dns-6578955fd5-zj8ll\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.320969 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.322561 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.342120 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.367164 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.388428 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.430556 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-scripts\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.430622 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-config-data-custom\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.430662 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-config-data\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.430759 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.431060 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d59457b0-c009-4522-9ac1-5709e17986dc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.431147 
4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d59457b0-c009-4522-9ac1-5709e17986dc-logs\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.431178 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29r45\" (UniqueName: \"kubernetes.io/projected/d59457b0-c009-4522-9ac1-5709e17986dc-kube-api-access-29r45\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.532864 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.532940 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d59457b0-c009-4522-9ac1-5709e17986dc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.532963 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d59457b0-c009-4522-9ac1-5709e17986dc-logs\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.532977 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29r45\" (UniqueName: 
\"kubernetes.io/projected/d59457b0-c009-4522-9ac1-5709e17986dc-kube-api-access-29r45\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.533005 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-scripts\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.533027 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-config-data-custom\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.533059 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-config-data\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.534096 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d59457b0-c009-4522-9ac1-5709e17986dc-logs\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.534621 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d59457b0-c009-4522-9ac1-5709e17986dc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.536844 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-config-data\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.538047 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.539659 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-scripts\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.541332 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-config-data-custom\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.553427 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29r45\" (UniqueName: \"kubernetes.io/projected/d59457b0-c009-4522-9ac1-5709e17986dc-kube-api-access-29r45\") pod \"cinder-api-0\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " pod="openstack/cinder-api-0" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.611086 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:34 crc kubenswrapper[4832]: I0312 15:07:34.661268 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 12 15:07:35 crc kubenswrapper[4832]: I0312 15:07:35.274646 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 12 15:07:35 crc kubenswrapper[4832]: W0312 15:07:35.283303 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00f04fdf_70d4_43a9_9f36_b3c3d0c36b55.slice/crio-a7fe8e14af882ffe1efa1f373507e8bc0d5e54e7e37a1e54c4df3d02f445540e WatchSource:0}: Error finding container a7fe8e14af882ffe1efa1f373507e8bc0d5e54e7e37a1e54c4df3d02f445540e: Status 404 returned error can't find the container with id a7fe8e14af882ffe1efa1f373507e8bc0d5e54e7e37a1e54c4df3d02f445540e Mar 12 15:07:35 crc kubenswrapper[4832]: W0312 15:07:35.284961 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd59457b0_c009_4522_9ac1_5709e17986dc.slice/crio-1337dd0f50d7f487feb4c55b5e4d39ef840489ef3bbf7925b0a35f8759da16e0 WatchSource:0}: Error finding container 1337dd0f50d7f487feb4c55b5e4d39ef840489ef3bbf7925b0a35f8759da16e0: Status 404 returned error can't find the container with id 1337dd0f50d7f487feb4c55b5e4d39ef840489ef3bbf7925b0a35f8759da16e0 Mar 12 15:07:35 crc kubenswrapper[4832]: I0312 15:07:35.287363 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 15:07:35 crc kubenswrapper[4832]: I0312 15:07:35.463203 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-zj8ll"] Mar 12 15:07:35 crc kubenswrapper[4832]: I0312 15:07:35.633088 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55","Type":"ContainerStarted","Data":"a7fe8e14af882ffe1efa1f373507e8bc0d5e54e7e37a1e54c4df3d02f445540e"} Mar 12 15:07:35 crc kubenswrapper[4832]: I0312 15:07:35.635011 4832 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" event={"ID":"957bf113-e0d7-4a03-a4dc-2968a44f66f6","Type":"ContainerStarted","Data":"6203f42e2c33039c2f47736b6f8649b39b22159f92bc3761ccc76dd8ea39bbbe"} Mar 12 15:07:35 crc kubenswrapper[4832]: I0312 15:07:35.635175 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" podUID="957bf113-e0d7-4a03-a4dc-2968a44f66f6" containerName="dnsmasq-dns" containerID="cri-o://6203f42e2c33039c2f47736b6f8649b39b22159f92bc3761ccc76dd8ea39bbbe" gracePeriod=10 Mar 12 15:07:35 crc kubenswrapper[4832]: I0312 15:07:35.635451 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:35 crc kubenswrapper[4832]: I0312 15:07:35.639140 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" event={"ID":"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36","Type":"ContainerStarted","Data":"c76fcc295c7a4e73d2bf5cce2b691a52a540f5777cb461c25d03a577ac53e27d"} Mar 12 15:07:35 crc kubenswrapper[4832]: I0312 15:07:35.640676 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d59457b0-c009-4522-9ac1-5709e17986dc","Type":"ContainerStarted","Data":"1337dd0f50d7f487feb4c55b5e4d39ef840489ef3bbf7925b0a35f8759da16e0"} Mar 12 15:07:35 crc kubenswrapper[4832]: I0312 15:07:35.645126 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b235a23d-a7da-4545-8047-36a5c88b66bb","Type":"ContainerStarted","Data":"5f3bd740415bbd833dfadec8f2fa85f199c0de8f1aa08f389f3fc7cb1a5ece1c"} Mar 12 15:07:35 crc kubenswrapper[4832]: I0312 15:07:35.649823 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" 
event={"ID":"4f29ee66-6d6f-4940-9283-7bd2bff068b6","Type":"ContainerStarted","Data":"8e4f06814c340e9069ba4d377ecbf8fae6f84c5499e7bcbcaf798da9fad5c5e9"} Mar 12 15:07:35 crc kubenswrapper[4832]: I0312 15:07:35.649895 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" event={"ID":"4f29ee66-6d6f-4940-9283-7bd2bff068b6","Type":"ContainerStarted","Data":"a8293ebbfdf93b22cd84bd0076405ea74b7248ae9458bdd8c889527392cc572a"} Mar 12 15:07:35 crc kubenswrapper[4832]: I0312 15:07:35.658719 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" podStartSLOduration=4.658698388 podStartE2EDuration="4.658698388s" podCreationTimestamp="2026-03-12 15:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:07:35.651464769 +0000 UTC m=+1214.295478995" watchObservedRunningTime="2026-03-12 15:07:35.658698388 +0000 UTC m=+1214.302712614" Mar 12 15:07:35 crc kubenswrapper[4832]: I0312 15:07:35.671929 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5586cfd7d8-lcjh2" podStartSLOduration=2.616488077 podStartE2EDuration="4.671907242s" podCreationTimestamp="2026-03-12 15:07:31 +0000 UTC" firstStartedPulling="2026-03-12 15:07:32.752606217 +0000 UTC m=+1211.396620443" lastFinishedPulling="2026-03-12 15:07:34.808025382 +0000 UTC m=+1213.452039608" observedRunningTime="2026-03-12 15:07:35.668775801 +0000 UTC m=+1214.312790017" watchObservedRunningTime="2026-03-12 15:07:35.671907242 +0000 UTC m=+1214.315921468" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.251096 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.381995 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-dns-svc\") pod \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.382323 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnv6p\" (UniqueName: \"kubernetes.io/projected/957bf113-e0d7-4a03-a4dc-2968a44f66f6-kube-api-access-cnv6p\") pod \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.382354 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-config\") pod \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.382373 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-ovsdbserver-nb\") pod \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.382534 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-ovsdbserver-sb\") pod \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.382565 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-dns-swift-storage-0\") pod \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\" (UID: \"957bf113-e0d7-4a03-a4dc-2968a44f66f6\") " Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.386756 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/957bf113-e0d7-4a03-a4dc-2968a44f66f6-kube-api-access-cnv6p" (OuterVolumeSpecName: "kube-api-access-cnv6p") pod "957bf113-e0d7-4a03-a4dc-2968a44f66f6" (UID: "957bf113-e0d7-4a03-a4dc-2968a44f66f6"). InnerVolumeSpecName "kube-api-access-cnv6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.440036 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "957bf113-e0d7-4a03-a4dc-2968a44f66f6" (UID: "957bf113-e0d7-4a03-a4dc-2968a44f66f6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.450036 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-config" (OuterVolumeSpecName: "config") pod "957bf113-e0d7-4a03-a4dc-2968a44f66f6" (UID: "957bf113-e0d7-4a03-a4dc-2968a44f66f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.457482 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "957bf113-e0d7-4a03-a4dc-2968a44f66f6" (UID: "957bf113-e0d7-4a03-a4dc-2968a44f66f6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.464809 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "957bf113-e0d7-4a03-a4dc-2968a44f66f6" (UID: "957bf113-e0d7-4a03-a4dc-2968a44f66f6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.477845 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "957bf113-e0d7-4a03-a4dc-2968a44f66f6" (UID: "957bf113-e0d7-4a03-a4dc-2968a44f66f6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.484942 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.484979 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.484993 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.485008 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnv6p\" (UniqueName: 
\"kubernetes.io/projected/957bf113-e0d7-4a03-a4dc-2968a44f66f6-kube-api-access-cnv6p\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.485023 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.485034 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/957bf113-e0d7-4a03-a4dc-2968a44f66f6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.669092 4832 generic.go:334] "Generic (PLEG): container finished" podID="1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36" containerID="6b229be2ad87a262af29e701dd31acb93e56bc39c16c5c945b273c747058508f" exitCode=0 Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.669152 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" event={"ID":"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36","Type":"ContainerDied","Data":"6b229be2ad87a262af29e701dd31acb93e56bc39c16c5c945b273c747058508f"} Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.684631 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d59457b0-c009-4522-9ac1-5709e17986dc","Type":"ContainerStarted","Data":"4ec758ebd9510535a3651b64775024e656fc291cf716b9dcffb6c8ad3bee2c48"} Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.697792 4832 generic.go:334] "Generic (PLEG): container finished" podID="957bf113-e0d7-4a03-a4dc-2968a44f66f6" containerID="6203f42e2c33039c2f47736b6f8649b39b22159f92bc3761ccc76dd8ea39bbbe" exitCode=0 Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.697880 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.697894 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" event={"ID":"957bf113-e0d7-4a03-a4dc-2968a44f66f6","Type":"ContainerDied","Data":"6203f42e2c33039c2f47736b6f8649b39b22159f92bc3761ccc76dd8ea39bbbe"} Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.697927 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-vqwdw" event={"ID":"957bf113-e0d7-4a03-a4dc-2968a44f66f6","Type":"ContainerDied","Data":"487068fdbaac6554b63ca5bb853e6cdde89d5ed36d95c7f42f5668f94dfbb164"} Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.697948 4832 scope.go:117] "RemoveContainer" containerID="6203f42e2c33039c2f47736b6f8649b39b22159f92bc3761ccc76dd8ea39bbbe" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.705261 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5456b889d5-mb698" event={"ID":"44661885-c36b-4450-b181-4bfa5f442420","Type":"ContainerStarted","Data":"28fe616d0cd2897e32934cd52c167878ff918a76af8c9c4f1836263cd14117c4"} Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.705307 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5456b889d5-mb698" event={"ID":"44661885-c36b-4450-b181-4bfa5f442420","Type":"ContainerStarted","Data":"0b0450d393ff11b702fd75d199d918508053f2abfa0fd301826cae04b559ec7f"} Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.726679 4832 scope.go:117] "RemoveContainer" containerID="cc309d7541d187299bb521a1d4eccb89bb473d09a33fb46cc9bad4db0b4a9849" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.734850 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-vqwdw"] Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.744591 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-848cf88cfc-vqwdw"] Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.769562 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5456b889d5-mb698" podStartSLOduration=2.663234773 podStartE2EDuration="5.769542652s" podCreationTimestamp="2026-03-12 15:07:31 +0000 UTC" firstStartedPulling="2026-03-12 15:07:32.834125352 +0000 UTC m=+1211.478139578" lastFinishedPulling="2026-03-12 15:07:35.940433231 +0000 UTC m=+1214.584447457" observedRunningTime="2026-03-12 15:07:36.733087645 +0000 UTC m=+1215.377101881" watchObservedRunningTime="2026-03-12 15:07:36.769542652 +0000 UTC m=+1215.413556878" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.788279 4832 scope.go:117] "RemoveContainer" containerID="6203f42e2c33039c2f47736b6f8649b39b22159f92bc3761ccc76dd8ea39bbbe" Mar 12 15:07:36 crc kubenswrapper[4832]: E0312 15:07:36.788918 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6203f42e2c33039c2f47736b6f8649b39b22159f92bc3761ccc76dd8ea39bbbe\": container with ID starting with 6203f42e2c33039c2f47736b6f8649b39b22159f92bc3761ccc76dd8ea39bbbe not found: ID does not exist" containerID="6203f42e2c33039c2f47736b6f8649b39b22159f92bc3761ccc76dd8ea39bbbe" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.788981 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6203f42e2c33039c2f47736b6f8649b39b22159f92bc3761ccc76dd8ea39bbbe"} err="failed to get container status \"6203f42e2c33039c2f47736b6f8649b39b22159f92bc3761ccc76dd8ea39bbbe\": rpc error: code = NotFound desc = could not find container \"6203f42e2c33039c2f47736b6f8649b39b22159f92bc3761ccc76dd8ea39bbbe\": container with ID starting with 6203f42e2c33039c2f47736b6f8649b39b22159f92bc3761ccc76dd8ea39bbbe not found: ID does not exist" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.789002 4832 scope.go:117] 
"RemoveContainer" containerID="cc309d7541d187299bb521a1d4eccb89bb473d09a33fb46cc9bad4db0b4a9849" Mar 12 15:07:36 crc kubenswrapper[4832]: E0312 15:07:36.791349 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc309d7541d187299bb521a1d4eccb89bb473d09a33fb46cc9bad4db0b4a9849\": container with ID starting with cc309d7541d187299bb521a1d4eccb89bb473d09a33fb46cc9bad4db0b4a9849 not found: ID does not exist" containerID="cc309d7541d187299bb521a1d4eccb89bb473d09a33fb46cc9bad4db0b4a9849" Mar 12 15:07:36 crc kubenswrapper[4832]: I0312 15:07:36.791576 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc309d7541d187299bb521a1d4eccb89bb473d09a33fb46cc9bad4db0b4a9849"} err="failed to get container status \"cc309d7541d187299bb521a1d4eccb89bb473d09a33fb46cc9bad4db0b4a9849\": rpc error: code = NotFound desc = could not find container \"cc309d7541d187299bb521a1d4eccb89bb473d09a33fb46cc9bad4db0b4a9849\": container with ID starting with cc309d7541d187299bb521a1d4eccb89bb473d09a33fb46cc9bad4db0b4a9849 not found: ID does not exist" Mar 12 15:07:37 crc kubenswrapper[4832]: I0312 15:07:37.714848 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" event={"ID":"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36","Type":"ContainerStarted","Data":"68d558f2d8c0938028e178779ec5b4f22badff2be296a76c5c97769d6d66ecf5"} Mar 12 15:07:37 crc kubenswrapper[4832]: I0312 15:07:37.715094 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:37 crc kubenswrapper[4832]: I0312 15:07:37.719596 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d59457b0-c009-4522-9ac1-5709e17986dc","Type":"ContainerStarted","Data":"d2e66e90ba2a1a1b58aab5f009afcfa2210522635b1202aa0b751d99231c56af"} Mar 12 15:07:37 crc kubenswrapper[4832]: I0312 
15:07:37.719724 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 12 15:07:37 crc kubenswrapper[4832]: I0312 15:07:37.722518 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55","Type":"ContainerStarted","Data":"5f15118b3e7be851af99d2f3ee6648073fd863a48b0d4af15e1ccd842682ba5b"} Mar 12 15:07:37 crc kubenswrapper[4832]: I0312 15:07:37.722547 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55","Type":"ContainerStarted","Data":"b9e0290ddfee11f0d47cf9df7175eac7484979be6fe045a49a07798f02c6a376"} Mar 12 15:07:37 crc kubenswrapper[4832]: I0312 15:07:37.738859 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" podStartSLOduration=4.738840501 podStartE2EDuration="4.738840501s" podCreationTimestamp="2026-03-12 15:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:07:37.7315853 +0000 UTC m=+1216.375599536" watchObservedRunningTime="2026-03-12 15:07:37.738840501 +0000 UTC m=+1216.382854727" Mar 12 15:07:37 crc kubenswrapper[4832]: I0312 15:07:37.761470 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.758497503 podStartE2EDuration="4.761453557s" podCreationTimestamp="2026-03-12 15:07:33 +0000 UTC" firstStartedPulling="2026-03-12 15:07:35.285789031 +0000 UTC m=+1213.929803257" lastFinishedPulling="2026-03-12 15:07:36.288745055 +0000 UTC m=+1214.932759311" observedRunningTime="2026-03-12 15:07:37.754914687 +0000 UTC m=+1216.398928913" watchObservedRunningTime="2026-03-12 15:07:37.761453557 +0000 UTC m=+1216.405467783" Mar 12 15:07:37 crc kubenswrapper[4832]: I0312 15:07:37.771888 4832 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.771872899 podStartE2EDuration="3.771872899s" podCreationTimestamp="2026-03-12 15:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:07:37.768972145 +0000 UTC m=+1216.412986391" watchObservedRunningTime="2026-03-12 15:07:37.771872899 +0000 UTC m=+1216.415887115" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.058147 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.640344 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="957bf113-e0d7-4a03-a4dc-2968a44f66f6" path="/var/lib/kubelet/pods/957bf113-e0d7-4a03-a4dc-2968a44f66f6/volumes" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.739222 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-76485774fd-8dtp8"] Mar 12 15:07:38 crc kubenswrapper[4832]: E0312 15:07:38.739655 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="957bf113-e0d7-4a03-a4dc-2968a44f66f6" containerName="dnsmasq-dns" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.739667 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="957bf113-e0d7-4a03-a4dc-2968a44f66f6" containerName="dnsmasq-dns" Mar 12 15:07:38 crc kubenswrapper[4832]: E0312 15:07:38.739681 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="957bf113-e0d7-4a03-a4dc-2968a44f66f6" containerName="init" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.739687 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="957bf113-e0d7-4a03-a4dc-2968a44f66f6" containerName="init" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.739843 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="957bf113-e0d7-4a03-a4dc-2968a44f66f6" 
containerName="dnsmasq-dns" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.740710 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.743872 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.744421 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.751044 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76485774fd-8dtp8"] Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.832708 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d751c81a-b91d-4849-a382-81b234d4c6c8-logs\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.832807 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d751c81a-b91d-4849-a382-81b234d4c6c8-config-data-custom\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.832826 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d751c81a-b91d-4849-a382-81b234d4c6c8-public-tls-certs\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.832918 
4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d751c81a-b91d-4849-a382-81b234d4c6c8-internal-tls-certs\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.832945 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d751c81a-b91d-4849-a382-81b234d4c6c8-combined-ca-bundle\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.832992 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzc7c\" (UniqueName: \"kubernetes.io/projected/d751c81a-b91d-4849-a382-81b234d4c6c8-kube-api-access-mzc7c\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.833031 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d751c81a-b91d-4849-a382-81b234d4c6c8-config-data\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.933807 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d751c81a-b91d-4849-a382-81b234d4c6c8-config-data-custom\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 
15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.933846 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d751c81a-b91d-4849-a382-81b234d4c6c8-public-tls-certs\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.933898 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d751c81a-b91d-4849-a382-81b234d4c6c8-internal-tls-certs\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.933920 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d751c81a-b91d-4849-a382-81b234d4c6c8-combined-ca-bundle\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.933953 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzc7c\" (UniqueName: \"kubernetes.io/projected/d751c81a-b91d-4849-a382-81b234d4c6c8-kube-api-access-mzc7c\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.933977 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d751c81a-b91d-4849-a382-81b234d4c6c8-config-data\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc 
kubenswrapper[4832]: I0312 15:07:38.934041 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d751c81a-b91d-4849-a382-81b234d4c6c8-logs\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.934406 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d751c81a-b91d-4849-a382-81b234d4c6c8-logs\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.939813 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d751c81a-b91d-4849-a382-81b234d4c6c8-public-tls-certs\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.963186 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d751c81a-b91d-4849-a382-81b234d4c6c8-internal-tls-certs\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.963273 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d751c81a-b91d-4849-a382-81b234d4c6c8-config-data-custom\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.963531 4832 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-mzc7c\" (UniqueName: \"kubernetes.io/projected/d751c81a-b91d-4849-a382-81b234d4c6c8-kube-api-access-mzc7c\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.965119 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d751c81a-b91d-4849-a382-81b234d4c6c8-combined-ca-bundle\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:38 crc kubenswrapper[4832]: I0312 15:07:38.972697 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d751c81a-b91d-4849-a382-81b234d4c6c8-config-data\") pod \"barbican-api-76485774fd-8dtp8\" (UID: \"d751c81a-b91d-4849-a382-81b234d4c6c8\") " pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:39 crc kubenswrapper[4832]: I0312 15:07:39.062634 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:39 crc kubenswrapper[4832]: I0312 15:07:39.277208 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 12 15:07:39 crc kubenswrapper[4832]: I0312 15:07:39.678435 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76485774fd-8dtp8"] Mar 12 15:07:39 crc kubenswrapper[4832]: W0312 15:07:39.695799 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd751c81a_b91d_4849_a382_81b234d4c6c8.slice/crio-5d88d5357c6008ea5e73b892a52f6c85d717ae6f0388c055d5ad3a7ac334eca2 WatchSource:0}: Error finding container 5d88d5357c6008ea5e73b892a52f6c85d717ae6f0388c055d5ad3a7ac334eca2: Status 404 returned error can't find the container with id 5d88d5357c6008ea5e73b892a52f6c85d717ae6f0388c055d5ad3a7ac334eca2 Mar 12 15:07:39 crc kubenswrapper[4832]: I0312 15:07:39.760091 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76485774fd-8dtp8" event={"ID":"d751c81a-b91d-4849-a382-81b234d4c6c8","Type":"ContainerStarted","Data":"5d88d5357c6008ea5e73b892a52f6c85d717ae6f0388c055d5ad3a7ac334eca2"} Mar 12 15:07:39 crc kubenswrapper[4832]: I0312 15:07:39.760483 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d59457b0-c009-4522-9ac1-5709e17986dc" containerName="cinder-api-log" containerID="cri-o://4ec758ebd9510535a3651b64775024e656fc291cf716b9dcffb6c8ad3bee2c48" gracePeriod=30 Mar 12 15:07:39 crc kubenswrapper[4832]: I0312 15:07:39.760534 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d59457b0-c009-4522-9ac1-5709e17986dc" containerName="cinder-api" containerID="cri-o://d2e66e90ba2a1a1b58aab5f009afcfa2210522635b1202aa0b751d99231c56af" gracePeriod=30 Mar 12 15:07:40 crc kubenswrapper[4832]: 
I0312 15:07:40.725478 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:07:40 crc kubenswrapper[4832]: I0312 15:07:40.790876 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76485774fd-8dtp8" event={"ID":"d751c81a-b91d-4849-a382-81b234d4c6c8","Type":"ContainerStarted","Data":"d39242d5f606c518a490d527c6cc1d771e18f68cc2df25c816d7cbd4ee7cd0fa"} Mar 12 15:07:40 crc kubenswrapper[4832]: I0312 15:07:40.790933 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76485774fd-8dtp8" event={"ID":"d751c81a-b91d-4849-a382-81b234d4c6c8","Type":"ContainerStarted","Data":"488893d393740fc9b498d05d1b607e04970b019086bf96350d6dd2fff02404b3"} Mar 12 15:07:40 crc kubenswrapper[4832]: I0312 15:07:40.791004 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:40 crc kubenswrapper[4832]: I0312 15:07:40.791278 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:40 crc kubenswrapper[4832]: I0312 15:07:40.796680 4832 generic.go:334] "Generic (PLEG): container finished" podID="d59457b0-c009-4522-9ac1-5709e17986dc" containerID="d2e66e90ba2a1a1b58aab5f009afcfa2210522635b1202aa0b751d99231c56af" exitCode=0 Mar 12 15:07:40 crc kubenswrapper[4832]: I0312 15:07:40.796713 4832 generic.go:334] "Generic (PLEG): container finished" podID="d59457b0-c009-4522-9ac1-5709e17986dc" containerID="4ec758ebd9510535a3651b64775024e656fc291cf716b9dcffb6c8ad3bee2c48" exitCode=143 Mar 12 15:07:40 crc kubenswrapper[4832]: I0312 15:07:40.796737 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d59457b0-c009-4522-9ac1-5709e17986dc","Type":"ContainerDied","Data":"d2e66e90ba2a1a1b58aab5f009afcfa2210522635b1202aa0b751d99231c56af"} Mar 12 15:07:40 crc kubenswrapper[4832]: I0312 15:07:40.796766 
4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d59457b0-c009-4522-9ac1-5709e17986dc","Type":"ContainerDied","Data":"4ec758ebd9510535a3651b64775024e656fc291cf716b9dcffb6c8ad3bee2c48"} Mar 12 15:07:40 crc kubenswrapper[4832]: I0312 15:07:40.809950 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-76485774fd-8dtp8" podStartSLOduration=2.809904017 podStartE2EDuration="2.809904017s" podCreationTimestamp="2026-03-12 15:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:07:40.807830137 +0000 UTC m=+1219.451844363" watchObservedRunningTime="2026-03-12 15:07:40.809904017 +0000 UTC m=+1219.453918243" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.011251 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8656449bc9-dm7zf"] Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.011760 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8656449bc9-dm7zf" podUID="4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" containerName="neutron-api" containerID="cri-o://62f772491df4d89fdff5a4ac560ce1fb3ab77ab702b56b95053385553f2ed0ee" gracePeriod=30 Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.011800 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8656449bc9-dm7zf" podUID="4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" containerName="neutron-httpd" containerID="cri-o://ae1039c0ad29e047c50e8ac376ed0cdbda2cd7d7a7d6a0a65cef98a14d8b2455" gracePeriod=30 Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.029465 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-8656449bc9-dm7zf" podUID="4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.159:9696/\": EOF" Mar 12 15:07:41 
crc kubenswrapper[4832]: I0312 15:07:41.051418 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7c8f87d6b5-dbffr"] Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.060598 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.086878 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c8f87d6b5-dbffr"] Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.087826 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3d8dc5-b55e-4c33-a15b-77741921f451-ovndb-tls-certs\") pod \"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.087884 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd3d8dc5-b55e-4c33-a15b-77741921f451-httpd-config\") pod \"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.087956 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mflll\" (UniqueName: \"kubernetes.io/projected/fd3d8dc5-b55e-4c33-a15b-77741921f451-kube-api-access-mflll\") pod \"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.087996 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3d8dc5-b55e-4c33-a15b-77741921f451-internal-tls-certs\") pod 
\"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.088032 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3d8dc5-b55e-4c33-a15b-77741921f451-public-tls-certs\") pod \"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.088066 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd3d8dc5-b55e-4c33-a15b-77741921f451-config\") pod \"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.088095 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3d8dc5-b55e-4c33-a15b-77741921f451-combined-ca-bundle\") pod \"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.189549 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mflll\" (UniqueName: \"kubernetes.io/projected/fd3d8dc5-b55e-4c33-a15b-77741921f451-kube-api-access-mflll\") pod \"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.189614 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3d8dc5-b55e-4c33-a15b-77741921f451-internal-tls-certs\") pod \"neutron-7c8f87d6b5-dbffr\" 
(UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.189646 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3d8dc5-b55e-4c33-a15b-77741921f451-public-tls-certs\") pod \"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.189683 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd3d8dc5-b55e-4c33-a15b-77741921f451-config\") pod \"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.189714 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3d8dc5-b55e-4c33-a15b-77741921f451-combined-ca-bundle\") pod \"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.189748 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3d8dc5-b55e-4c33-a15b-77741921f451-ovndb-tls-certs\") pod \"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.189780 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd3d8dc5-b55e-4c33-a15b-77741921f451-httpd-config\") pod \"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc 
kubenswrapper[4832]: I0312 15:07:41.195854 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd3d8dc5-b55e-4c33-a15b-77741921f451-httpd-config\") pod \"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.220187 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3d8dc5-b55e-4c33-a15b-77741921f451-public-tls-certs\") pod \"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.224166 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3d8dc5-b55e-4c33-a15b-77741921f451-combined-ca-bundle\") pod \"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.224250 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3d8dc5-b55e-4c33-a15b-77741921f451-internal-tls-certs\") pod \"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.224984 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd3d8dc5-b55e-4c33-a15b-77741921f451-config\") pod \"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.229734 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fd3d8dc5-b55e-4c33-a15b-77741921f451-ovndb-tls-certs\") pod \"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.242082 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mflll\" (UniqueName: \"kubernetes.io/projected/fd3d8dc5-b55e-4c33-a15b-77741921f451-kube-api-access-mflll\") pod \"neutron-7c8f87d6b5-dbffr\" (UID: \"fd3d8dc5-b55e-4c33-a15b-77741921f451\") " pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.380024 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.522928 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.810977 4832 generic.go:334] "Generic (PLEG): container finished" podID="4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" containerID="ae1039c0ad29e047c50e8ac376ed0cdbda2cd7d7a7d6a0a65cef98a14d8b2455" exitCode=0 Mar 12 15:07:41 crc kubenswrapper[4832]: I0312 15:07:41.813296 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8656449bc9-dm7zf" event={"ID":"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499","Type":"ContainerDied","Data":"ae1039c0ad29e047c50e8ac376ed0cdbda2cd7d7a7d6a0a65cef98a14d8b2455"} Mar 12 15:07:42 crc kubenswrapper[4832]: I0312 15:07:42.132165 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:07:42 crc kubenswrapper[4832]: I0312 15:07:42.804007 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-8656449bc9-dm7zf" podUID="4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.159:9696/\": dial 
tcp 10.217.0.159:9696: connect: connection refused" Mar 12 15:07:43 crc kubenswrapper[4832]: I0312 15:07:43.295613 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:07:43 crc kubenswrapper[4832]: I0312 15:07:43.734620 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7c5974b5d4-dhhm8" Mar 12 15:07:43 crc kubenswrapper[4832]: I0312 15:07:43.799186 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-745cdbf99b-kdz5c"] Mar 12 15:07:43 crc kubenswrapper[4832]: I0312 15:07:43.829769 4832 generic.go:334] "Generic (PLEG): container finished" podID="4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" containerID="62f772491df4d89fdff5a4ac560ce1fb3ab77ab702b56b95053385553f2ed0ee" exitCode=0 Mar 12 15:07:43 crc kubenswrapper[4832]: I0312 15:07:43.829953 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-745cdbf99b-kdz5c" podUID="7b32181c-0268-4e3e-8b7b-f2811720ce58" containerName="horizon-log" containerID="cri-o://7d90420d7c4978b59e4c942fbeed59f9ab03f090713025b604574ee43351e007" gracePeriod=30 Mar 12 15:07:43 crc kubenswrapper[4832]: I0312 15:07:43.830188 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8656449bc9-dm7zf" event={"ID":"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499","Type":"ContainerDied","Data":"62f772491df4d89fdff5a4ac560ce1fb3ab77ab702b56b95053385553f2ed0ee"} Mar 12 15:07:43 crc kubenswrapper[4832]: I0312 15:07:43.830407 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-745cdbf99b-kdz5c" podUID="7b32181c-0268-4e3e-8b7b-f2811720ce58" containerName="horizon" containerID="cri-o://ff9a699aa277b6a5323dd5d39dc4caf402a7524bc23fdae63427ae7fa7ee13cf" gracePeriod=30 Mar 12 15:07:44 crc kubenswrapper[4832]: I0312 15:07:44.075675 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:44 crc kubenswrapper[4832]: I0312 15:07:44.097109 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:44 crc kubenswrapper[4832]: I0312 15:07:44.390741 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:07:44 crc kubenswrapper[4832]: I0312 15:07:44.448799 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-nbwkf"] Mar 12 15:07:44 crc kubenswrapper[4832]: I0312 15:07:44.449146 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" podUID="4ef725f8-44da-46ad-8a0d-e5eed8cd6106" containerName="dnsmasq-dns" containerID="cri-o://c605f235ece560d14049a5912a947372f14557898d2aa9762c079ed01f9ed2e8" gracePeriod=10 Mar 12 15:07:44 crc kubenswrapper[4832]: I0312 15:07:44.525640 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 12 15:07:44 crc kubenswrapper[4832]: I0312 15:07:44.565062 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 15:07:44 crc kubenswrapper[4832]: I0312 15:07:44.841078 4832 generic.go:334] "Generic (PLEG): container finished" podID="4ef725f8-44da-46ad-8a0d-e5eed8cd6106" containerID="c605f235ece560d14049a5912a947372f14557898d2aa9762c079ed01f9ed2e8" exitCode=0 Mar 12 15:07:44 crc kubenswrapper[4832]: I0312 15:07:44.841157 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" event={"ID":"4ef725f8-44da-46ad-8a0d-e5eed8cd6106","Type":"ContainerDied","Data":"c605f235ece560d14049a5912a947372f14557898d2aa9762c079ed01f9ed2e8"} Mar 12 15:07:44 crc kubenswrapper[4832]: I0312 15:07:44.841314 4832 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-scheduler-0" podUID="00f04fdf-70d4-43a9-9f36-b3c3d0c36b55" containerName="cinder-scheduler" containerID="cri-o://b9e0290ddfee11f0d47cf9df7175eac7484979be6fe045a49a07798f02c6a376" gracePeriod=30 Mar 12 15:07:44 crc kubenswrapper[4832]: I0312 15:07:44.841638 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="00f04fdf-70d4-43a9-9f36-b3c3d0c36b55" containerName="probe" containerID="cri-o://5f15118b3e7be851af99d2f3ee6648073fd863a48b0d4af15e1ccd842682ba5b" gracePeriod=30 Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.281996 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.474772 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-combined-ca-bundle\") pod \"d59457b0-c009-4522-9ac1-5709e17986dc\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.474868 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-config-data-custom\") pod \"d59457b0-c009-4522-9ac1-5709e17986dc\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.474908 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d59457b0-c009-4522-9ac1-5709e17986dc-logs\") pod \"d59457b0-c009-4522-9ac1-5709e17986dc\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.474935 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-config-data\") pod \"d59457b0-c009-4522-9ac1-5709e17986dc\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.475001 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29r45\" (UniqueName: \"kubernetes.io/projected/d59457b0-c009-4522-9ac1-5709e17986dc-kube-api-access-29r45\") pod \"d59457b0-c009-4522-9ac1-5709e17986dc\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.475116 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-scripts\") pod \"d59457b0-c009-4522-9ac1-5709e17986dc\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.475178 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d59457b0-c009-4522-9ac1-5709e17986dc-etc-machine-id\") pod \"d59457b0-c009-4522-9ac1-5709e17986dc\" (UID: \"d59457b0-c009-4522-9ac1-5709e17986dc\") " Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.475474 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d59457b0-c009-4522-9ac1-5709e17986dc-logs" (OuterVolumeSpecName: "logs") pod "d59457b0-c009-4522-9ac1-5709e17986dc" (UID: "d59457b0-c009-4522-9ac1-5709e17986dc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.476358 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d59457b0-c009-4522-9ac1-5709e17986dc-logs\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.476875 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d59457b0-c009-4522-9ac1-5709e17986dc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d59457b0-c009-4522-9ac1-5709e17986dc" (UID: "d59457b0-c009-4522-9ac1-5709e17986dc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.480794 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d59457b0-c009-4522-9ac1-5709e17986dc-kube-api-access-29r45" (OuterVolumeSpecName: "kube-api-access-29r45") pod "d59457b0-c009-4522-9ac1-5709e17986dc" (UID: "d59457b0-c009-4522-9ac1-5709e17986dc"). InnerVolumeSpecName "kube-api-access-29r45". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.509955 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-scripts" (OuterVolumeSpecName: "scripts") pod "d59457b0-c009-4522-9ac1-5709e17986dc" (UID: "d59457b0-c009-4522-9ac1-5709e17986dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.510039 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d59457b0-c009-4522-9ac1-5709e17986dc" (UID: "d59457b0-c009-4522-9ac1-5709e17986dc"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.533177 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d59457b0-c009-4522-9ac1-5709e17986dc" (UID: "d59457b0-c009-4522-9ac1-5709e17986dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.554115 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-config-data" (OuterVolumeSpecName: "config-data") pod "d59457b0-c009-4522-9ac1-5709e17986dc" (UID: "d59457b0-c009-4522-9ac1-5709e17986dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.578161 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.578205 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.578220 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.578234 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29r45\" (UniqueName: 
\"kubernetes.io/projected/d59457b0-c009-4522-9ac1-5709e17986dc-kube-api-access-29r45\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.578250 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d59457b0-c009-4522-9ac1-5709e17986dc-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.578262 4832 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d59457b0-c009-4522-9ac1-5709e17986dc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.634882 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" podUID="4ef725f8-44da-46ad-8a0d-e5eed8cd6106" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.157:5353: connect: connection refused" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.693017 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.857023 4832 generic.go:334] "Generic (PLEG): container finished" podID="00f04fdf-70d4-43a9-9f36-b3c3d0c36b55" containerID="5f15118b3e7be851af99d2f3ee6648073fd863a48b0d4af15e1ccd842682ba5b" exitCode=0 Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.857086 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55","Type":"ContainerDied","Data":"5f15118b3e7be851af99d2f3ee6648073fd863a48b0d4af15e1ccd842682ba5b"} Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.861234 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"d59457b0-c009-4522-9ac1-5709e17986dc","Type":"ContainerDied","Data":"1337dd0f50d7f487feb4c55b5e4d39ef840489ef3bbf7925b0a35f8759da16e0"} Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.861305 4832 scope.go:117] "RemoveContainer" containerID="d2e66e90ba2a1a1b58aab5f009afcfa2210522635b1202aa0b751d99231c56af" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.861567 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.904353 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.913456 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.939565 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 12 15:07:45 crc kubenswrapper[4832]: E0312 15:07:45.939973 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59457b0-c009-4522-9ac1-5709e17986dc" containerName="cinder-api" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.939989 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59457b0-c009-4522-9ac1-5709e17986dc" containerName="cinder-api" Mar 12 15:07:45 crc kubenswrapper[4832]: E0312 15:07:45.940016 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59457b0-c009-4522-9ac1-5709e17986dc" containerName="cinder-api-log" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.940022 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59457b0-c009-4522-9ac1-5709e17986dc" containerName="cinder-api-log" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.940191 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d59457b0-c009-4522-9ac1-5709e17986dc" containerName="cinder-api" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.940221 4832 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d59457b0-c009-4522-9ac1-5709e17986dc" containerName="cinder-api-log" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.941140 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.949467 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.949732 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.949876 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 12 15:07:45 crc kubenswrapper[4832]: I0312 15:07:45.962230 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.084680 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-scripts\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.084966 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-logs\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.085009 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zxpr\" (UniqueName: \"kubernetes.io/projected/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-kube-api-access-5zxpr\") pod \"cinder-api-0\" (UID: 
\"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.085047 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-config-data\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.085066 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.085106 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.085153 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.085167 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: 
I0312 15:07:46.085205 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-config-data-custom\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.186531 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.186581 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.186625 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-config-data-custom\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.186663 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-scripts\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.186695 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-logs\") 
pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.186732 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zxpr\" (UniqueName: \"kubernetes.io/projected/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-kube-api-access-5zxpr\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.186771 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-config-data\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.186784 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.186795 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.186889 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.187809 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-logs\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.191059 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-config-data-custom\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.192607 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-config-data\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.193969 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.202245 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-scripts\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.202376 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc 
kubenswrapper[4832]: I0312 15:07:46.202656 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.206421 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zxpr\" (UniqueName: \"kubernetes.io/projected/e85b97b4-a179-4d8b-bb70-86bc2ae08d70-kube-api-access-5zxpr\") pod \"cinder-api-0\" (UID: \"e85b97b4-a179-4d8b-bb70-86bc2ae08d70\") " pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.260660 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.279605 4832 scope.go:117] "RemoveContainer" containerID="4ec758ebd9510535a3651b64775024e656fc291cf716b9dcffb6c8ad3bee2c48" Mar 12 15:07:46 crc kubenswrapper[4832]: I0312 15:07:46.630162 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d59457b0-c009-4522-9ac1-5709e17986dc" path="/var/lib/kubelet/pods/d59457b0-c009-4522-9ac1-5709e17986dc/volumes" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.172665 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76485774fd-8dtp8" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.238873 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7f7c7fbfc6-5ms49"] Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.239097 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7f7c7fbfc6-5ms49" podUID="a7becc0c-3efc-4992-ab31-6c67fd190769" containerName="barbican-api-log" containerID="cri-o://914738b41af1b673ad4af843792ed4ce988afba46843df20a03484c5fb77fa54" 
gracePeriod=30 Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.239917 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7f7c7fbfc6-5ms49" podUID="a7becc0c-3efc-4992-ab31-6c67fd190769" containerName="barbican-api" containerID="cri-o://1d02d5b61d494ddf85c1889fec126c43bd143dd15c9cd1731536fd1ecac64050" gracePeriod=30 Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.371391 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.499828 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.517577 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-dns-swift-storage-0\") pod \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.517860 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-ovsdbserver-sb\") pod \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.517942 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngkft\" (UniqueName: \"kubernetes.io/projected/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-kube-api-access-ngkft\") pod \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.518336 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-config\") pod \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.518405 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-ovsdbserver-nb\") pod \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.518517 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-dns-svc\") pod \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\" (UID: \"4ef725f8-44da-46ad-8a0d-e5eed8cd6106\") " Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.537677 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-kube-api-access-ngkft" (OuterVolumeSpecName: "kube-api-access-ngkft") pod "4ef725f8-44da-46ad-8a0d-e5eed8cd6106" (UID: "4ef725f8-44da-46ad-8a0d-e5eed8cd6106"). InnerVolumeSpecName "kube-api-access-ngkft". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.571916 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4ef725f8-44da-46ad-8a0d-e5eed8cd6106" (UID: "4ef725f8-44da-46ad-8a0d-e5eed8cd6106"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.577735 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4ef725f8-44da-46ad-8a0d-e5eed8cd6106" (UID: "4ef725f8-44da-46ad-8a0d-e5eed8cd6106"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.579627 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-config" (OuterVolumeSpecName: "config") pod "4ef725f8-44da-46ad-8a0d-e5eed8cd6106" (UID: "4ef725f8-44da-46ad-8a0d-e5eed8cd6106"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.588177 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4ef725f8-44da-46ad-8a0d-e5eed8cd6106" (UID: "4ef725f8-44da-46ad-8a0d-e5eed8cd6106"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.589009 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4ef725f8-44da-46ad-8a0d-e5eed8cd6106" (UID: "4ef725f8-44da-46ad-8a0d-e5eed8cd6106"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.621785 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-httpd-config\") pod \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.621961 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-config\") pod \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.621977 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-public-tls-certs\") pod \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.621997 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-combined-ca-bundle\") pod \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.622031 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-ovndb-tls-certs\") pod \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.622096 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-internal-tls-certs\") pod \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.622129 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvm85\" (UniqueName: \"kubernetes.io/projected/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-kube-api-access-zvm85\") pod \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\" (UID: \"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499\") " Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.622565 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.622583 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.622595 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngkft\" (UniqueName: \"kubernetes.io/projected/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-kube-api-access-ngkft\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.622607 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.622619 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.622628 4832 reconciler_common.go:293] "Volume 
detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ef725f8-44da-46ad-8a0d-e5eed8cd6106-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.628959 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-kube-api-access-zvm85" (OuterVolumeSpecName: "kube-api-access-zvm85") pod "4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" (UID: "4dd2bbb8-22a1-48e3-b4fd-7869bb93e499"). InnerVolumeSpecName "kube-api-access-zvm85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.630102 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" (UID: "4dd2bbb8-22a1-48e3-b4fd-7869bb93e499"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.669708 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" (UID: "4dd2bbb8-22a1-48e3-b4fd-7869bb93e499"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.684497 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" (UID: "4dd2bbb8-22a1-48e3-b4fd-7869bb93e499"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.690688 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-config" (OuterVolumeSpecName: "config") pod "4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" (UID: "4dd2bbb8-22a1-48e3-b4fd-7869bb93e499"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.707700 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" (UID: "4dd2bbb8-22a1-48e3-b4fd-7869bb93e499"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.715403 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" (UID: "4dd2bbb8-22a1-48e3-b4fd-7869bb93e499"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.725006 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.725036 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.725048 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.725056 4832 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.725063 4832 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.725072 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvm85\" (UniqueName: \"kubernetes.io/projected/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-kube-api-access-zvm85\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.725079 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.751476 4832 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.841131 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c8f87d6b5-dbffr"] Mar 12 15:07:47 crc kubenswrapper[4832]: W0312 15:07:47.856833 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd3d8dc5_b55e_4c33_a15b_77741921f451.slice/crio-f8d2aa9bf9a9b6c5b613024b78fc62283cba24f2bd63f002c2780e55c81eae1e WatchSource:0}: Error finding container f8d2aa9bf9a9b6c5b613024b78fc62283cba24f2bd63f002c2780e55c81eae1e: Status 404 returned error can't find the container with id f8d2aa9bf9a9b6c5b613024b78fc62283cba24f2bd63f002c2780e55c81eae1e Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.892906 4832 generic.go:334] "Generic (PLEG): container finished" podID="a7becc0c-3efc-4992-ab31-6c67fd190769" containerID="914738b41af1b673ad4af843792ed4ce988afba46843df20a03484c5fb77fa54" exitCode=143 Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.892966 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f7c7fbfc6-5ms49" event={"ID":"a7becc0c-3efc-4992-ab31-6c67fd190769","Type":"ContainerDied","Data":"914738b41af1b673ad4af843792ed4ce988afba46843df20a03484c5fb77fa54"} Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.894928 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8656449bc9-dm7zf" event={"ID":"4dd2bbb8-22a1-48e3-b4fd-7869bb93e499","Type":"ContainerDied","Data":"c4f4c31a977e475c6caf914359a33c7025cc8c189960abb5cced5b08e9a73405"} Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.894960 4832 scope.go:117] "RemoveContainer" containerID="ae1039c0ad29e047c50e8ac376ed0cdbda2cd7d7a7d6a0a65cef98a14d8b2455" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.895045 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8656449bc9-dm7zf" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.897480 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e85b97b4-a179-4d8b-bb70-86bc2ae08d70","Type":"ContainerStarted","Data":"92d75a85042b2cf47049bfc9816e377e02e95add3f3f71da38b3799620af1a7e"} Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.909214 4832 generic.go:334] "Generic (PLEG): container finished" podID="00f04fdf-70d4-43a9-9f36-b3c3d0c36b55" containerID="b9e0290ddfee11f0d47cf9df7175eac7484979be6fe045a49a07798f02c6a376" exitCode=0 Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.909276 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55","Type":"ContainerDied","Data":"b9e0290ddfee11f0d47cf9df7175eac7484979be6fe045a49a07798f02c6a376"} Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.910911 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c8f87d6b5-dbffr" event={"ID":"fd3d8dc5-b55e-4c33-a15b-77741921f451","Type":"ContainerStarted","Data":"f8d2aa9bf9a9b6c5b613024b78fc62283cba24f2bd63f002c2780e55c81eae1e"} Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.914979 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" event={"ID":"4ef725f8-44da-46ad-8a0d-e5eed8cd6106","Type":"ContainerDied","Data":"e442ea0b2d09641d4702b5822c5ed6f8bb9729b7b6a6ec2777b7f041a397b306"} Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.915041 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-nbwkf" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.955583 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8656449bc9-dm7zf"] Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.956466 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b235a23d-a7da-4545-8047-36a5c88b66bb","Type":"ContainerStarted","Data":"c65b34d94b934b0d27407af16f002e76b75c46b527fd763b26a10bb4ba7605ab"} Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.957039 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.966885 4832 generic.go:334] "Generic (PLEG): container finished" podID="7b32181c-0268-4e3e-8b7b-f2811720ce58" containerID="ff9a699aa277b6a5323dd5d39dc4caf402a7524bc23fdae63427ae7fa7ee13cf" exitCode=0 Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.966933 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-745cdbf99b-kdz5c" event={"ID":"7b32181c-0268-4e3e-8b7b-f2811720ce58","Type":"ContainerDied","Data":"ff9a699aa277b6a5323dd5d39dc4caf402a7524bc23fdae63427ae7fa7ee13cf"} Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.970691 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8656449bc9-dm7zf"] Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.979946 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-nbwkf"] Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.984802 4832 scope.go:117] "RemoveContainer" containerID="62f772491df4d89fdff5a4ac560ce1fb3ab77ab702b56b95053385553f2ed0ee" Mar 12 15:07:47 crc kubenswrapper[4832]: I0312 15:07:47.998458 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-nbwkf"] Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.013522 4832 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.476265693 podStartE2EDuration="18.013490022s" podCreationTimestamp="2026-03-12 15:07:30 +0000 UTC" firstStartedPulling="2026-03-12 15:07:31.511020812 +0000 UTC m=+1210.155035058" lastFinishedPulling="2026-03-12 15:07:47.048245161 +0000 UTC m=+1225.692259387" observedRunningTime="2026-03-12 15:07:47.981257797 +0000 UTC m=+1226.625272023" watchObservedRunningTime="2026-03-12 15:07:48.013490022 +0000 UTC m=+1226.657504248" Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.019697 4832 scope.go:117] "RemoveContainer" containerID="c605f235ece560d14049a5912a947372f14557898d2aa9762c079ed01f9ed2e8" Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.061337 4832 scope.go:117] "RemoveContainer" containerID="cc292c3054618677f5e476ef1fc1082bfb575bca8a861212a8c0ad4ea3b11ae2" Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.185398 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.346209 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-combined-ca-bundle\") pod \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.346587 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbvmb\" (UniqueName: \"kubernetes.io/projected/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-kube-api-access-kbvmb\") pod \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.346652 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-config-data-custom\") pod \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.346698 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-etc-machine-id\") pod \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.346803 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-scripts\") pod \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.346858 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-config-data\") pod \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\" (UID: \"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55\") " Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.347017 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "00f04fdf-70d4-43a9-9f36-b3c3d0c36b55" (UID: "00f04fdf-70d4-43a9-9f36-b3c3d0c36b55"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.347481 4832 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.361777 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-scripts" (OuterVolumeSpecName: "scripts") pod "00f04fdf-70d4-43a9-9f36-b3c3d0c36b55" (UID: "00f04fdf-70d4-43a9-9f36-b3c3d0c36b55"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.361858 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-kube-api-access-kbvmb" (OuterVolumeSpecName: "kube-api-access-kbvmb") pod "00f04fdf-70d4-43a9-9f36-b3c3d0c36b55" (UID: "00f04fdf-70d4-43a9-9f36-b3c3d0c36b55"). InnerVolumeSpecName "kube-api-access-kbvmb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.368311 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "00f04fdf-70d4-43a9-9f36-b3c3d0c36b55" (UID: "00f04fdf-70d4-43a9-9f36-b3c3d0c36b55"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.446669 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00f04fdf-70d4-43a9-9f36-b3c3d0c36b55" (UID: "00f04fdf-70d4-43a9-9f36-b3c3d0c36b55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.460780 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.460817 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.460829 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbvmb\" (UniqueName: \"kubernetes.io/projected/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-kube-api-access-kbvmb\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.460838 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-config-data-custom\") 
on node \"crc\" DevicePath \"\"" Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.506943 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-config-data" (OuterVolumeSpecName: "config-data") pod "00f04fdf-70d4-43a9-9f36-b3c3d0c36b55" (UID: "00f04fdf-70d4-43a9-9f36-b3c3d0c36b55"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.562578 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.633975 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" path="/var/lib/kubelet/pods/4dd2bbb8-22a1-48e3-b4fd-7869bb93e499/volumes" Mar 12 15:07:48 crc kubenswrapper[4832]: I0312 15:07:48.634566 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef725f8-44da-46ad-8a0d-e5eed8cd6106" path="/var/lib/kubelet/pods/4ef725f8-44da-46ad-8a0d-e5eed8cd6106/volumes" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.008485 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e85b97b4-a179-4d8b-bb70-86bc2ae08d70","Type":"ContainerStarted","Data":"87cf092d8e04a94da2ccb8966ec34a6813e1259583a046657e8f572a19622128"} Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.045350 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c8f87d6b5-dbffr" event={"ID":"fd3d8dc5-b55e-4c33-a15b-77741921f451","Type":"ContainerStarted","Data":"2e4b9fffca4b2d8b42148102f1381f18996df9f93650c27b4daf4ab8c4aeece3"} Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.045420 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c8f87d6b5-dbffr" 
event={"ID":"fd3d8dc5-b55e-4c33-a15b-77741921f451","Type":"ContainerStarted","Data":"bbf23c79b18a886259e865df6d8cc5280498efca410a6ceaae68751472a440d6"} Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.045482 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.061668 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.062112 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"00f04fdf-70d4-43a9-9f36-b3c3d0c36b55","Type":"ContainerDied","Data":"a7fe8e14af882ffe1efa1f373507e8bc0d5e54e7e37a1e54c4df3d02f445540e"} Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.062164 4832 scope.go:117] "RemoveContainer" containerID="5f15118b3e7be851af99d2f3ee6648073fd863a48b0d4af15e1ccd842682ba5b" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.082860 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7c8f87d6b5-dbffr" podStartSLOduration=8.082835042 podStartE2EDuration="8.082835042s" podCreationTimestamp="2026-03-12 15:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:07:49.075709406 +0000 UTC m=+1227.719723632" watchObservedRunningTime="2026-03-12 15:07:49.082835042 +0000 UTC m=+1227.726849278" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.109555 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.121714 4832 scope.go:117] "RemoveContainer" containerID="b9e0290ddfee11f0d47cf9df7175eac7484979be6fe045a49a07798f02c6a376" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.121848 4832 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.143646 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 15:07:49 crc kubenswrapper[4832]: E0312 15:07:49.144083 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef725f8-44da-46ad-8a0d-e5eed8cd6106" containerName="init" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.144095 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef725f8-44da-46ad-8a0d-e5eed8cd6106" containerName="init" Mar 12 15:07:49 crc kubenswrapper[4832]: E0312 15:07:49.144110 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" containerName="neutron-api" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.144118 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" containerName="neutron-api" Mar 12 15:07:49 crc kubenswrapper[4832]: E0312 15:07:49.144134 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f04fdf-70d4-43a9-9f36-b3c3d0c36b55" containerName="probe" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.144140 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f04fdf-70d4-43a9-9f36-b3c3d0c36b55" containerName="probe" Mar 12 15:07:49 crc kubenswrapper[4832]: E0312 15:07:49.144153 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" containerName="neutron-httpd" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.144159 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" containerName="neutron-httpd" Mar 12 15:07:49 crc kubenswrapper[4832]: E0312 15:07:49.144178 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef725f8-44da-46ad-8a0d-e5eed8cd6106" containerName="dnsmasq-dns" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.144186 
4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef725f8-44da-46ad-8a0d-e5eed8cd6106" containerName="dnsmasq-dns" Mar 12 15:07:49 crc kubenswrapper[4832]: E0312 15:07:49.144197 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f04fdf-70d4-43a9-9f36-b3c3d0c36b55" containerName="cinder-scheduler" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.144206 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f04fdf-70d4-43a9-9f36-b3c3d0c36b55" containerName="cinder-scheduler" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.144387 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="00f04fdf-70d4-43a9-9f36-b3c3d0c36b55" containerName="probe" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.144401 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="00f04fdf-70d4-43a9-9f36-b3c3d0c36b55" containerName="cinder-scheduler" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.144411 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" containerName="neutron-api" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.144428 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef725f8-44da-46ad-8a0d-e5eed8cd6106" containerName="dnsmasq-dns" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.144456 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dd2bbb8-22a1-48e3-b4fd-7869bb93e499" containerName="neutron-httpd" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.145400 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.149481 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.165400 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.280454 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eca6fd5d-1b83-4ff3-8216-41f81b29555f-scripts\") pod \"cinder-scheduler-0\" (UID: \"eca6fd5d-1b83-4ff3-8216-41f81b29555f\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.280758 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eca6fd5d-1b83-4ff3-8216-41f81b29555f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eca6fd5d-1b83-4ff3-8216-41f81b29555f\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.280778 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eca6fd5d-1b83-4ff3-8216-41f81b29555f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eca6fd5d-1b83-4ff3-8216-41f81b29555f\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.280826 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eca6fd5d-1b83-4ff3-8216-41f81b29555f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eca6fd5d-1b83-4ff3-8216-41f81b29555f\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.280852 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58fbn\" (UniqueName: \"kubernetes.io/projected/eca6fd5d-1b83-4ff3-8216-41f81b29555f-kube-api-access-58fbn\") pod \"cinder-scheduler-0\" (UID: \"eca6fd5d-1b83-4ff3-8216-41f81b29555f\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.280895 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eca6fd5d-1b83-4ff3-8216-41f81b29555f-config-data\") pod \"cinder-scheduler-0\" (UID: \"eca6fd5d-1b83-4ff3-8216-41f81b29555f\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.381919 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eca6fd5d-1b83-4ff3-8216-41f81b29555f-scripts\") pod \"cinder-scheduler-0\" (UID: \"eca6fd5d-1b83-4ff3-8216-41f81b29555f\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.381971 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eca6fd5d-1b83-4ff3-8216-41f81b29555f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eca6fd5d-1b83-4ff3-8216-41f81b29555f\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.381998 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eca6fd5d-1b83-4ff3-8216-41f81b29555f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eca6fd5d-1b83-4ff3-8216-41f81b29555f\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.382027 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/eca6fd5d-1b83-4ff3-8216-41f81b29555f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eca6fd5d-1b83-4ff3-8216-41f81b29555f\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.382045 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58fbn\" (UniqueName: \"kubernetes.io/projected/eca6fd5d-1b83-4ff3-8216-41f81b29555f-kube-api-access-58fbn\") pod \"cinder-scheduler-0\" (UID: \"eca6fd5d-1b83-4ff3-8216-41f81b29555f\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.382074 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eca6fd5d-1b83-4ff3-8216-41f81b29555f-config-data\") pod \"cinder-scheduler-0\" (UID: \"eca6fd5d-1b83-4ff3-8216-41f81b29555f\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.385747 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eca6fd5d-1b83-4ff3-8216-41f81b29555f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eca6fd5d-1b83-4ff3-8216-41f81b29555f\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.390689 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eca6fd5d-1b83-4ff3-8216-41f81b29555f-config-data\") pod \"cinder-scheduler-0\" (UID: \"eca6fd5d-1b83-4ff3-8216-41f81b29555f\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.391498 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eca6fd5d-1b83-4ff3-8216-41f81b29555f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eca6fd5d-1b83-4ff3-8216-41f81b29555f\") " 
pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.395278 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eca6fd5d-1b83-4ff3-8216-41f81b29555f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eca6fd5d-1b83-4ff3-8216-41f81b29555f\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.402107 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eca6fd5d-1b83-4ff3-8216-41f81b29555f-scripts\") pod \"cinder-scheduler-0\" (UID: \"eca6fd5d-1b83-4ff3-8216-41f81b29555f\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.407975 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58fbn\" (UniqueName: \"kubernetes.io/projected/eca6fd5d-1b83-4ff3-8216-41f81b29555f-kube-api-access-58fbn\") pod \"cinder-scheduler-0\" (UID: \"eca6fd5d-1b83-4ff3-8216-41f81b29555f\") " pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.499993 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.530199 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-745cdbf99b-kdz5c" podUID="7b32181c-0268-4e3e-8b7b-f2811720ce58" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.663051 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="d59457b0-c009-4522-9ac1-5709e17986dc" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.172:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 15:07:49 crc kubenswrapper[4832]: I0312 15:07:49.985905 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 15:07:50 crc kubenswrapper[4832]: I0312 15:07:50.116728 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e85b97b4-a179-4d8b-bb70-86bc2ae08d70","Type":"ContainerStarted","Data":"83bf8004912ce35d2d6747b71a8d0b2c496ce304542617414050964c4095a78c"} Mar 12 15:07:50 crc kubenswrapper[4832]: I0312 15:07:50.118098 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 12 15:07:50 crc kubenswrapper[4832]: I0312 15:07:50.125686 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eca6fd5d-1b83-4ff3-8216-41f81b29555f","Type":"ContainerStarted","Data":"f6bf499d931c795721b57f4f14a8b74a590f76a950d439b4e7e69c8b219cf453"} Mar 12 15:07:50 crc kubenswrapper[4832]: I0312 15:07:50.145151 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.145132917 podStartE2EDuration="5.145132917s" 
podCreationTimestamp="2026-03-12 15:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:07:50.134950042 +0000 UTC m=+1228.778964258" watchObservedRunningTime="2026-03-12 15:07:50.145132917 +0000 UTC m=+1228.789147143" Mar 12 15:07:50 crc kubenswrapper[4832]: I0312 15:07:50.400867 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:50 crc kubenswrapper[4832]: I0312 15:07:50.430833 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f7c7fbfc6-5ms49" podUID="a7becc0c-3efc-4992-ab31-6c67fd190769" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.169:9311/healthcheck\": read tcp 10.217.0.2:55092->10.217.0.169:9311: read: connection reset by peer" Mar 12 15:07:50 crc kubenswrapper[4832]: I0312 15:07:50.430842 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f7c7fbfc6-5ms49" podUID="a7becc0c-3efc-4992-ab31-6c67fd190769" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.169:9311/healthcheck\": read tcp 10.217.0.2:55094->10.217.0.169:9311: read: connection reset by peer" Mar 12 15:07:50 crc kubenswrapper[4832]: I0312 15:07:50.466186 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:50 crc kubenswrapper[4832]: I0312 15:07:50.644335 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00f04fdf-70d4-43a9-9f36-b3c3d0c36b55" path="/var/lib/kubelet/pods/00f04fdf-70d4-43a9-9f36-b3c3d0c36b55/volumes" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.047845 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.093478 4832 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.108048 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-58b9f48778-gcmpc" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.165164 4832 generic.go:334] "Generic (PLEG): container finished" podID="a7becc0c-3efc-4992-ab31-6c67fd190769" containerID="1d02d5b61d494ddf85c1889fec126c43bd143dd15c9cd1731536fd1ecac64050" exitCode=0 Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.165234 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f7c7fbfc6-5ms49" event={"ID":"a7becc0c-3efc-4992-ab31-6c67fd190769","Type":"ContainerDied","Data":"1d02d5b61d494ddf85c1889fec126c43bd143dd15c9cd1731536fd1ecac64050"} Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.165262 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f7c7fbfc6-5ms49" event={"ID":"a7becc0c-3efc-4992-ab31-6c67fd190769","Type":"ContainerDied","Data":"96b235365eae397e46c7d51da0751eba2936a6496b14369898c4fbb4408bebf2"} Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.165278 4832 scope.go:117] "RemoveContainer" containerID="1d02d5b61d494ddf85c1889fec126c43bd143dd15c9cd1731536fd1ecac64050" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.165335 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f7c7fbfc6-5ms49" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.191952 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5578794dcb-v62kh"] Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.216275 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eca6fd5d-1b83-4ff3-8216-41f81b29555f","Type":"ContainerStarted","Data":"00f613587d1baf48820b22f920fe5aff6f7df741d11279b6920ec7469078f99f"} Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.240128 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7becc0c-3efc-4992-ab31-6c67fd190769-config-data-custom\") pod \"a7becc0c-3efc-4992-ab31-6c67fd190769\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.240558 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7becc0c-3efc-4992-ab31-6c67fd190769-combined-ca-bundle\") pod \"a7becc0c-3efc-4992-ab31-6c67fd190769\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.240722 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7becc0c-3efc-4992-ab31-6c67fd190769-logs\") pod \"a7becc0c-3efc-4992-ab31-6c67fd190769\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.240763 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjmhl\" (UniqueName: \"kubernetes.io/projected/a7becc0c-3efc-4992-ab31-6c67fd190769-kube-api-access-pjmhl\") pod \"a7becc0c-3efc-4992-ab31-6c67fd190769\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " Mar 12 15:07:51 crc kubenswrapper[4832]: 
I0312 15:07:51.240812 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7becc0c-3efc-4992-ab31-6c67fd190769-config-data\") pod \"a7becc0c-3efc-4992-ab31-6c67fd190769\" (UID: \"a7becc0c-3efc-4992-ab31-6c67fd190769\") " Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.244418 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7becc0c-3efc-4992-ab31-6c67fd190769-logs" (OuterVolumeSpecName: "logs") pod "a7becc0c-3efc-4992-ab31-6c67fd190769" (UID: "a7becc0c-3efc-4992-ab31-6c67fd190769"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.257973 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7becc0c-3efc-4992-ab31-6c67fd190769-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a7becc0c-3efc-4992-ab31-6c67fd190769" (UID: "a7becc0c-3efc-4992-ab31-6c67fd190769"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.268969 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7becc0c-3efc-4992-ab31-6c67fd190769-kube-api-access-pjmhl" (OuterVolumeSpecName: "kube-api-access-pjmhl") pod "a7becc0c-3efc-4992-ab31-6c67fd190769" (UID: "a7becc0c-3efc-4992-ab31-6c67fd190769"). InnerVolumeSpecName "kube-api-access-pjmhl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.271705 4832 scope.go:117] "RemoveContainer" containerID="914738b41af1b673ad4af843792ed4ce988afba46843df20a03484c5fb77fa54" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.335209 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7becc0c-3efc-4992-ab31-6c67fd190769-config-data" (OuterVolumeSpecName: "config-data") pod "a7becc0c-3efc-4992-ab31-6c67fd190769" (UID: "a7becc0c-3efc-4992-ab31-6c67fd190769"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.336685 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7becc0c-3efc-4992-ab31-6c67fd190769-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7becc0c-3efc-4992-ab31-6c67fd190769" (UID: "a7becc0c-3efc-4992-ab31-6c67fd190769"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.347733 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7becc0c-3efc-4992-ab31-6c67fd190769-logs\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.347774 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjmhl\" (UniqueName: \"kubernetes.io/projected/a7becc0c-3efc-4992-ab31-6c67fd190769-kube-api-access-pjmhl\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.347791 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7becc0c-3efc-4992-ab31-6c67fd190769-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.347815 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7becc0c-3efc-4992-ab31-6c67fd190769-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.347828 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7becc0c-3efc-4992-ab31-6c67fd190769-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.409366 4832 scope.go:117] "RemoveContainer" containerID="1d02d5b61d494ddf85c1889fec126c43bd143dd15c9cd1731536fd1ecac64050" Mar 12 15:07:51 crc kubenswrapper[4832]: E0312 15:07:51.409887 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d02d5b61d494ddf85c1889fec126c43bd143dd15c9cd1731536fd1ecac64050\": container with ID starting with 1d02d5b61d494ddf85c1889fec126c43bd143dd15c9cd1731536fd1ecac64050 not found: ID does not exist" 
containerID="1d02d5b61d494ddf85c1889fec126c43bd143dd15c9cd1731536fd1ecac64050" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.409921 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d02d5b61d494ddf85c1889fec126c43bd143dd15c9cd1731536fd1ecac64050"} err="failed to get container status \"1d02d5b61d494ddf85c1889fec126c43bd143dd15c9cd1731536fd1ecac64050\": rpc error: code = NotFound desc = could not find container \"1d02d5b61d494ddf85c1889fec126c43bd143dd15c9cd1731536fd1ecac64050\": container with ID starting with 1d02d5b61d494ddf85c1889fec126c43bd143dd15c9cd1731536fd1ecac64050 not found: ID does not exist" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.409946 4832 scope.go:117] "RemoveContainer" containerID="914738b41af1b673ad4af843792ed4ce988afba46843df20a03484c5fb77fa54" Mar 12 15:07:51 crc kubenswrapper[4832]: E0312 15:07:51.413657 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"914738b41af1b673ad4af843792ed4ce988afba46843df20a03484c5fb77fa54\": container with ID starting with 914738b41af1b673ad4af843792ed4ce988afba46843df20a03484c5fb77fa54 not found: ID does not exist" containerID="914738b41af1b673ad4af843792ed4ce988afba46843df20a03484c5fb77fa54" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.413701 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"914738b41af1b673ad4af843792ed4ce988afba46843df20a03484c5fb77fa54"} err="failed to get container status \"914738b41af1b673ad4af843792ed4ce988afba46843df20a03484c5fb77fa54\": rpc error: code = NotFound desc = could not find container \"914738b41af1b673ad4af843792ed4ce988afba46843df20a03484c5fb77fa54\": container with ID starting with 914738b41af1b673ad4af843792ed4ce988afba46843df20a03484c5fb77fa54 not found: ID does not exist" Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.513942 4832 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7f7c7fbfc6-5ms49"] Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.534317 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7f7c7fbfc6-5ms49"] Mar 12 15:07:51 crc kubenswrapper[4832]: I0312 15:07:51.834835 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-84975bc55b-p4rz5" Mar 12 15:07:52 crc kubenswrapper[4832]: I0312 15:07:52.226395 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eca6fd5d-1b83-4ff3-8216-41f81b29555f","Type":"ContainerStarted","Data":"801c4cd9e140d3776c31f66a494c717201dff3ed0aaf078960e7e5d7d09e7ec9"} Mar 12 15:07:52 crc kubenswrapper[4832]: I0312 15:07:52.226710 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5578794dcb-v62kh" podUID="c7262d72-51ea-48ae-8dc5-c0cb0d46f69c" containerName="placement-log" containerID="cri-o://f77e13c5e1118df9cbc733128dbb6422c40c322c824f95f316e5c55d1147e212" gracePeriod=30 Mar 12 15:07:52 crc kubenswrapper[4832]: I0312 15:07:52.226774 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5578794dcb-v62kh" podUID="c7262d72-51ea-48ae-8dc5-c0cb0d46f69c" containerName="placement-api" containerID="cri-o://5180cc10c743765f86eb70821a11e9f4a556a726df55e21ddecbf3ad3dfa3ae2" gracePeriod=30 Mar 12 15:07:52 crc kubenswrapper[4832]: I0312 15:07:52.252692 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.252668487 podStartE2EDuration="3.252668487s" podCreationTimestamp="2026-03-12 15:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:07:52.247749706 +0000 UTC m=+1230.891763932" watchObservedRunningTime="2026-03-12 15:07:52.252668487 +0000 UTC 
m=+1230.896682713" Mar 12 15:07:52 crc kubenswrapper[4832]: I0312 15:07:52.630096 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7becc0c-3efc-4992-ab31-6c67fd190769" path="/var/lib/kubelet/pods/a7becc0c-3efc-4992-ab31-6c67fd190769/volumes" Mar 12 15:07:53 crc kubenswrapper[4832]: I0312 15:07:53.242499 4832 generic.go:334] "Generic (PLEG): container finished" podID="c7262d72-51ea-48ae-8dc5-c0cb0d46f69c" containerID="f77e13c5e1118df9cbc733128dbb6422c40c322c824f95f316e5c55d1147e212" exitCode=143 Mar 12 15:07:53 crc kubenswrapper[4832]: I0312 15:07:53.242586 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5578794dcb-v62kh" event={"ID":"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c","Type":"ContainerDied","Data":"f77e13c5e1118df9cbc733128dbb6422c40c322c824f95f316e5c55d1147e212"} Mar 12 15:07:54 crc kubenswrapper[4832]: I0312 15:07:54.500795 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 12 15:07:54 crc kubenswrapper[4832]: I0312 15:07:54.836799 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 12 15:07:54 crc kubenswrapper[4832]: E0312 15:07:54.837725 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7becc0c-3efc-4992-ab31-6c67fd190769" containerName="barbican-api-log" Mar 12 15:07:54 crc kubenswrapper[4832]: I0312 15:07:54.837817 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7becc0c-3efc-4992-ab31-6c67fd190769" containerName="barbican-api-log" Mar 12 15:07:54 crc kubenswrapper[4832]: E0312 15:07:54.837891 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7becc0c-3efc-4992-ab31-6c67fd190769" containerName="barbican-api" Mar 12 15:07:54 crc kubenswrapper[4832]: I0312 15:07:54.837950 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7becc0c-3efc-4992-ab31-6c67fd190769" containerName="barbican-api" Mar 12 15:07:54 crc kubenswrapper[4832]: I0312 
15:07:54.838171 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7becc0c-3efc-4992-ab31-6c67fd190769" containerName="barbican-api-log" Mar 12 15:07:54 crc kubenswrapper[4832]: I0312 15:07:54.838240 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7becc0c-3efc-4992-ab31-6c67fd190769" containerName="barbican-api" Mar 12 15:07:54 crc kubenswrapper[4832]: I0312 15:07:54.838857 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 12 15:07:54 crc kubenswrapper[4832]: I0312 15:07:54.842989 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 12 15:07:54 crc kubenswrapper[4832]: I0312 15:07:54.843034 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 12 15:07:54 crc kubenswrapper[4832]: I0312 15:07:54.843523 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-lmkvc" Mar 12 15:07:54 crc kubenswrapper[4832]: I0312 15:07:54.852557 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.022761 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0988fee7-998d-4cf1-9740-9ccbdc012168-openstack-config\") pod \"openstackclient\" (UID: \"0988fee7-998d-4cf1-9740-9ccbdc012168\") " pod="openstack/openstackclient" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.022821 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq72n\" (UniqueName: \"kubernetes.io/projected/0988fee7-998d-4cf1-9740-9ccbdc012168-kube-api-access-vq72n\") pod \"openstackclient\" (UID: \"0988fee7-998d-4cf1-9740-9ccbdc012168\") " pod="openstack/openstackclient" Mar 12 
15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.022891 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0988fee7-998d-4cf1-9740-9ccbdc012168-openstack-config-secret\") pod \"openstackclient\" (UID: \"0988fee7-998d-4cf1-9740-9ccbdc012168\") " pod="openstack/openstackclient" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.022913 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0988fee7-998d-4cf1-9740-9ccbdc012168-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0988fee7-998d-4cf1-9740-9ccbdc012168\") " pod="openstack/openstackclient" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.125685 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0988fee7-998d-4cf1-9740-9ccbdc012168-openstack-config\") pod \"openstackclient\" (UID: \"0988fee7-998d-4cf1-9740-9ccbdc012168\") " pod="openstack/openstackclient" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.125774 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq72n\" (UniqueName: \"kubernetes.io/projected/0988fee7-998d-4cf1-9740-9ccbdc012168-kube-api-access-vq72n\") pod \"openstackclient\" (UID: \"0988fee7-998d-4cf1-9740-9ccbdc012168\") " pod="openstack/openstackclient" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.125870 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0988fee7-998d-4cf1-9740-9ccbdc012168-openstack-config-secret\") pod \"openstackclient\" (UID: \"0988fee7-998d-4cf1-9740-9ccbdc012168\") " pod="openstack/openstackclient" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.125922 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0988fee7-998d-4cf1-9740-9ccbdc012168-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0988fee7-998d-4cf1-9740-9ccbdc012168\") " pod="openstack/openstackclient" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.127732 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0988fee7-998d-4cf1-9740-9ccbdc012168-openstack-config\") pod \"openstackclient\" (UID: \"0988fee7-998d-4cf1-9740-9ccbdc012168\") " pod="openstack/openstackclient" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.133797 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0988fee7-998d-4cf1-9740-9ccbdc012168-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0988fee7-998d-4cf1-9740-9ccbdc012168\") " pod="openstack/openstackclient" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.142124 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0988fee7-998d-4cf1-9740-9ccbdc012168-openstack-config-secret\") pod \"openstackclient\" (UID: \"0988fee7-998d-4cf1-9740-9ccbdc012168\") " pod="openstack/openstackclient" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.147032 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq72n\" (UniqueName: \"kubernetes.io/projected/0988fee7-998d-4cf1-9740-9ccbdc012168-kube-api-access-vq72n\") pod \"openstackclient\" (UID: \"0988fee7-998d-4cf1-9740-9ccbdc012168\") " pod="openstack/openstackclient" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.174868 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.691694 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.756053 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.859934 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-config-data\") pod \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.860068 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-combined-ca-bundle\") pod \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.860158 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-public-tls-certs\") pod \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.860180 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kv9n\" (UniqueName: \"kubernetes.io/projected/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-kube-api-access-8kv9n\") pod \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.860237 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-logs\") pod \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.860305 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-internal-tls-certs\") pod \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.860351 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-scripts\") pod \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\" (UID: \"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c\") " Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.860910 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-logs" (OuterVolumeSpecName: "logs") pod "c7262d72-51ea-48ae-8dc5-c0cb0d46f69c" (UID: "c7262d72-51ea-48ae-8dc5-c0cb0d46f69c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.869251 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-scripts" (OuterVolumeSpecName: "scripts") pod "c7262d72-51ea-48ae-8dc5-c0cb0d46f69c" (UID: "c7262d72-51ea-48ae-8dc5-c0cb0d46f69c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.873245 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-kube-api-access-8kv9n" (OuterVolumeSpecName: "kube-api-access-8kv9n") pod "c7262d72-51ea-48ae-8dc5-c0cb0d46f69c" (UID: "c7262d72-51ea-48ae-8dc5-c0cb0d46f69c"). InnerVolumeSpecName "kube-api-access-8kv9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.944719 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-config-data" (OuterVolumeSpecName: "config-data") pod "c7262d72-51ea-48ae-8dc5-c0cb0d46f69c" (UID: "c7262d72-51ea-48ae-8dc5-c0cb0d46f69c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.948977 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7262d72-51ea-48ae-8dc5-c0cb0d46f69c" (UID: "c7262d72-51ea-48ae-8dc5-c0cb0d46f69c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.962392 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kv9n\" (UniqueName: \"kubernetes.io/projected/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-kube-api-access-8kv9n\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.962414 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-logs\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.962425 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.962433 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.962443 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.975082 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c7262d72-51ea-48ae-8dc5-c0cb0d46f69c" (UID: "c7262d72-51ea-48ae-8dc5-c0cb0d46f69c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:55 crc kubenswrapper[4832]: I0312 15:07:55.978708 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c7262d72-51ea-48ae-8dc5-c0cb0d46f69c" (UID: "c7262d72-51ea-48ae-8dc5-c0cb0d46f69c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.064911 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.064960 4832 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.279404 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0988fee7-998d-4cf1-9740-9ccbdc012168","Type":"ContainerStarted","Data":"dccf3523fce25ab627c7737009cf784a601ac46436326a8c9a91fcb3fd401fb8"} Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.281548 4832 generic.go:334] "Generic (PLEG): container finished" podID="c7262d72-51ea-48ae-8dc5-c0cb0d46f69c" containerID="5180cc10c743765f86eb70821a11e9f4a556a726df55e21ddecbf3ad3dfa3ae2" exitCode=0 Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.281588 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5578794dcb-v62kh" event={"ID":"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c","Type":"ContainerDied","Data":"5180cc10c743765f86eb70821a11e9f4a556a726df55e21ddecbf3ad3dfa3ae2"} Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.281620 4832 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/placement-5578794dcb-v62kh" event={"ID":"c7262d72-51ea-48ae-8dc5-c0cb0d46f69c","Type":"ContainerDied","Data":"f32552e418d4536c8bb6235d250e594cb6787370663d4c9a5f34c265979f7897"} Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.281680 4832 scope.go:117] "RemoveContainer" containerID="5180cc10c743765f86eb70821a11e9f4a556a726df55e21ddecbf3ad3dfa3ae2" Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.281835 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5578794dcb-v62kh" Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.313848 4832 scope.go:117] "RemoveContainer" containerID="f77e13c5e1118df9cbc733128dbb6422c40c322c824f95f316e5c55d1147e212" Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.318816 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.318863 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.318902 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.319894 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5ef664f80b54949b800723b7418c7329f135481aab0e1a581de0cbcca235b5b"} 
pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.324428 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" containerID="cri-o://b5ef664f80b54949b800723b7418c7329f135481aab0e1a581de0cbcca235b5b" gracePeriod=600 Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.335947 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5578794dcb-v62kh"] Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.348403 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5578794dcb-v62kh"] Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.369103 4832 scope.go:117] "RemoveContainer" containerID="5180cc10c743765f86eb70821a11e9f4a556a726df55e21ddecbf3ad3dfa3ae2" Mar 12 15:07:56 crc kubenswrapper[4832]: E0312 15:07:56.370690 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5180cc10c743765f86eb70821a11e9f4a556a726df55e21ddecbf3ad3dfa3ae2\": container with ID starting with 5180cc10c743765f86eb70821a11e9f4a556a726df55e21ddecbf3ad3dfa3ae2 not found: ID does not exist" containerID="5180cc10c743765f86eb70821a11e9f4a556a726df55e21ddecbf3ad3dfa3ae2" Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.370735 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5180cc10c743765f86eb70821a11e9f4a556a726df55e21ddecbf3ad3dfa3ae2"} err="failed to get container status \"5180cc10c743765f86eb70821a11e9f4a556a726df55e21ddecbf3ad3dfa3ae2\": rpc error: code = NotFound desc = could not find container \"5180cc10c743765f86eb70821a11e9f4a556a726df55e21ddecbf3ad3dfa3ae2\": container 
with ID starting with 5180cc10c743765f86eb70821a11e9f4a556a726df55e21ddecbf3ad3dfa3ae2 not found: ID does not exist" Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.370767 4832 scope.go:117] "RemoveContainer" containerID="f77e13c5e1118df9cbc733128dbb6422c40c322c824f95f316e5c55d1147e212" Mar 12 15:07:56 crc kubenswrapper[4832]: E0312 15:07:56.371222 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f77e13c5e1118df9cbc733128dbb6422c40c322c824f95f316e5c55d1147e212\": container with ID starting with f77e13c5e1118df9cbc733128dbb6422c40c322c824f95f316e5c55d1147e212 not found: ID does not exist" containerID="f77e13c5e1118df9cbc733128dbb6422c40c322c824f95f316e5c55d1147e212" Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.371256 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f77e13c5e1118df9cbc733128dbb6422c40c322c824f95f316e5c55d1147e212"} err="failed to get container status \"f77e13c5e1118df9cbc733128dbb6422c40c322c824f95f316e5c55d1147e212\": rpc error: code = NotFound desc = could not find container \"f77e13c5e1118df9cbc733128dbb6422c40c322c824f95f316e5c55d1147e212\": container with ID starting with f77e13c5e1118df9cbc733128dbb6422c40c322c824f95f316e5c55d1147e212 not found: ID does not exist" Mar 12 15:07:56 crc kubenswrapper[4832]: I0312 15:07:56.633310 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7262d72-51ea-48ae-8dc5-c0cb0d46f69c" path="/var/lib/kubelet/pods/c7262d72-51ea-48ae-8dc5-c0cb0d46f69c/volumes" Mar 12 15:07:57 crc kubenswrapper[4832]: I0312 15:07:57.308487 4832 generic.go:334] "Generic (PLEG): container finished" podID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerID="b5ef664f80b54949b800723b7418c7329f135481aab0e1a581de0cbcca235b5b" exitCode=0 Mar 12 15:07:57 crc kubenswrapper[4832]: I0312 15:07:57.308534 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerDied","Data":"b5ef664f80b54949b800723b7418c7329f135481aab0e1a581de0cbcca235b5b"} Mar 12 15:07:57 crc kubenswrapper[4832]: I0312 15:07:57.308833 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerStarted","Data":"ce9bdbc63a202a02b8c581188fe9e262cc502ac13b76d8ec31fd67aea15a9bba"} Mar 12 15:07:57 crc kubenswrapper[4832]: I0312 15:07:57.308855 4832 scope.go:117] "RemoveContainer" containerID="06828ac09576ddc400ed9fc2eeb342e216438273fb168f6b943b24fb1b40966f" Mar 12 15:07:58 crc kubenswrapper[4832]: I0312 15:07:58.214218 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 12 15:07:59 crc kubenswrapper[4832]: I0312 15:07:59.531767 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-745cdbf99b-kdz5c" podUID="7b32181c-0268-4e3e-8b7b-f2811720ce58" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Mar 12 15:07:59 crc kubenswrapper[4832]: I0312 15:07:59.736224 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.047642 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5b9648779f-rg6wj"] Mar 12 15:08:00 crc kubenswrapper[4832]: E0312 15:08:00.048231 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7262d72-51ea-48ae-8dc5-c0cb0d46f69c" containerName="placement-api" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.048246 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7262d72-51ea-48ae-8dc5-c0cb0d46f69c" 
containerName="placement-api" Mar 12 15:08:00 crc kubenswrapper[4832]: E0312 15:08:00.048268 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7262d72-51ea-48ae-8dc5-c0cb0d46f69c" containerName="placement-log" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.048274 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7262d72-51ea-48ae-8dc5-c0cb0d46f69c" containerName="placement-log" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.048467 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7262d72-51ea-48ae-8dc5-c0cb0d46f69c" containerName="placement-log" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.048492 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7262d72-51ea-48ae-8dc5-c0cb0d46f69c" containerName="placement-api" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.049390 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.051200 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.057753 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.058108 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.059285 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5b9648779f-rg6wj"] Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.143290 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555468-dnh7b"] Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.144491 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555468-dnh7b" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.148427 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.148646 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.150564 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.157695 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555468-dnh7b"] Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.161016 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/969337ac-7543-4b59-820e-61408d5af0c3-config-data\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.161050 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/969337ac-7543-4b59-820e-61408d5af0c3-log-httpd\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.161093 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/969337ac-7543-4b59-820e-61408d5af0c3-public-tls-certs\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 
15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.161117 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xsgg\" (UniqueName: \"kubernetes.io/projected/969337ac-7543-4b59-820e-61408d5af0c3-kube-api-access-7xsgg\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.161174 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/969337ac-7543-4b59-820e-61408d5af0c3-run-httpd\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.161202 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/969337ac-7543-4b59-820e-61408d5af0c3-internal-tls-certs\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.161228 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/969337ac-7543-4b59-820e-61408d5af0c3-combined-ca-bundle\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.161257 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/969337ac-7543-4b59-820e-61408d5af0c3-etc-swift\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " 
pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.262553 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/969337ac-7543-4b59-820e-61408d5af0c3-config-data\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.262588 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/969337ac-7543-4b59-820e-61408d5af0c3-log-httpd\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.262625 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w767t\" (UniqueName: \"kubernetes.io/projected/fa03d1b7-d97e-4806-86c9-8f77ce37f5bf-kube-api-access-w767t\") pod \"auto-csr-approver-29555468-dnh7b\" (UID: \"fa03d1b7-d97e-4806-86c9-8f77ce37f5bf\") " pod="openshift-infra/auto-csr-approver-29555468-dnh7b" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.262659 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/969337ac-7543-4b59-820e-61408d5af0c3-public-tls-certs\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.262681 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xsgg\" (UniqueName: \"kubernetes.io/projected/969337ac-7543-4b59-820e-61408d5af0c3-kube-api-access-7xsgg\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " 
pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.262738 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/969337ac-7543-4b59-820e-61408d5af0c3-run-httpd\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.262765 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/969337ac-7543-4b59-820e-61408d5af0c3-internal-tls-certs\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.262790 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/969337ac-7543-4b59-820e-61408d5af0c3-combined-ca-bundle\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.262818 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/969337ac-7543-4b59-820e-61408d5af0c3-etc-swift\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.264159 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/969337ac-7543-4b59-820e-61408d5af0c3-log-httpd\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc 
kubenswrapper[4832]: I0312 15:08:00.264191 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/969337ac-7543-4b59-820e-61408d5af0c3-run-httpd\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.268801 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/969337ac-7543-4b59-820e-61408d5af0c3-config-data\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.269387 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/969337ac-7543-4b59-820e-61408d5af0c3-public-tls-certs\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.269457 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/969337ac-7543-4b59-820e-61408d5af0c3-internal-tls-certs\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.270059 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/969337ac-7543-4b59-820e-61408d5af0c3-etc-swift\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.270106 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/969337ac-7543-4b59-820e-61408d5af0c3-combined-ca-bundle\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.284389 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xsgg\" (UniqueName: \"kubernetes.io/projected/969337ac-7543-4b59-820e-61408d5af0c3-kube-api-access-7xsgg\") pod \"swift-proxy-5b9648779f-rg6wj\" (UID: \"969337ac-7543-4b59-820e-61408d5af0c3\") " pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.364070 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w767t\" (UniqueName: \"kubernetes.io/projected/fa03d1b7-d97e-4806-86c9-8f77ce37f5bf-kube-api-access-w767t\") pod \"auto-csr-approver-29555468-dnh7b\" (UID: \"fa03d1b7-d97e-4806-86c9-8f77ce37f5bf\") " pod="openshift-infra/auto-csr-approver-29555468-dnh7b" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.378938 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w767t\" (UniqueName: \"kubernetes.io/projected/fa03d1b7-d97e-4806-86c9-8f77ce37f5bf-kube-api-access-w767t\") pod \"auto-csr-approver-29555468-dnh7b\" (UID: \"fa03d1b7-d97e-4806-86c9-8f77ce37f5bf\") " pod="openshift-infra/auto-csr-approver-29555468-dnh7b" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.385856 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.460806 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555468-dnh7b" Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.511709 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.512088 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b235a23d-a7da-4545-8047-36a5c88b66bb" containerName="sg-core" containerID="cri-o://5f3bd740415bbd833dfadec8f2fa85f199c0de8f1aa08f389f3fc7cb1a5ece1c" gracePeriod=30 Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.512231 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b235a23d-a7da-4545-8047-36a5c88b66bb" containerName="proxy-httpd" containerID="cri-o://c65b34d94b934b0d27407af16f002e76b75c46b527fd763b26a10bb4ba7605ab" gracePeriod=30 Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.512227 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b235a23d-a7da-4545-8047-36a5c88b66bb" containerName="ceilometer-notification-agent" containerID="cri-o://e14f155849ccf32d23a85d174aa6ae6ae743f5b69f1f8002159a0126c33000f0" gracePeriod=30 Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.511991 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b235a23d-a7da-4545-8047-36a5c88b66bb" containerName="ceilometer-central-agent" containerID="cri-o://c0fe9aec06c6585df3e5716ff58fdf792efa19a99db7c42bdd5c820ec5e575cb" gracePeriod=30 Mar 12 15:08:00 crc kubenswrapper[4832]: I0312 15:08:00.517761 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 12 15:08:01 crc kubenswrapper[4832]: I0312 15:08:01.005080 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b235a23d-a7da-4545-8047-36a5c88b66bb" 
containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.165:3000/\": dial tcp 10.217.0.165:3000: connect: connection refused" Mar 12 15:08:01 crc kubenswrapper[4832]: I0312 15:08:01.357799 4832 generic.go:334] "Generic (PLEG): container finished" podID="b235a23d-a7da-4545-8047-36a5c88b66bb" containerID="c65b34d94b934b0d27407af16f002e76b75c46b527fd763b26a10bb4ba7605ab" exitCode=0 Mar 12 15:08:01 crc kubenswrapper[4832]: I0312 15:08:01.357840 4832 generic.go:334] "Generic (PLEG): container finished" podID="b235a23d-a7da-4545-8047-36a5c88b66bb" containerID="5f3bd740415bbd833dfadec8f2fa85f199c0de8f1aa08f389f3fc7cb1a5ece1c" exitCode=2 Mar 12 15:08:01 crc kubenswrapper[4832]: I0312 15:08:01.357851 4832 generic.go:334] "Generic (PLEG): container finished" podID="b235a23d-a7da-4545-8047-36a5c88b66bb" containerID="e14f155849ccf32d23a85d174aa6ae6ae743f5b69f1f8002159a0126c33000f0" exitCode=0 Mar 12 15:08:01 crc kubenswrapper[4832]: I0312 15:08:01.357860 4832 generic.go:334] "Generic (PLEG): container finished" podID="b235a23d-a7da-4545-8047-36a5c88b66bb" containerID="c0fe9aec06c6585df3e5716ff58fdf792efa19a99db7c42bdd5c820ec5e575cb" exitCode=0 Mar 12 15:08:01 crc kubenswrapper[4832]: I0312 15:08:01.357874 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b235a23d-a7da-4545-8047-36a5c88b66bb","Type":"ContainerDied","Data":"c65b34d94b934b0d27407af16f002e76b75c46b527fd763b26a10bb4ba7605ab"} Mar 12 15:08:01 crc kubenswrapper[4832]: I0312 15:08:01.357942 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b235a23d-a7da-4545-8047-36a5c88b66bb","Type":"ContainerDied","Data":"5f3bd740415bbd833dfadec8f2fa85f199c0de8f1aa08f389f3fc7cb1a5ece1c"} Mar 12 15:08:01 crc kubenswrapper[4832]: I0312 15:08:01.357960 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b235a23d-a7da-4545-8047-36a5c88b66bb","Type":"ContainerDied","Data":"e14f155849ccf32d23a85d174aa6ae6ae743f5b69f1f8002159a0126c33000f0"} Mar 12 15:08:01 crc kubenswrapper[4832]: I0312 15:08:01.357972 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b235a23d-a7da-4545-8047-36a5c88b66bb","Type":"ContainerDied","Data":"c0fe9aec06c6585df3e5716ff58fdf792efa19a99db7c42bdd5c820ec5e575cb"} Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.293751 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.376121 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cn8s\" (UniqueName: \"kubernetes.io/projected/b235a23d-a7da-4545-8047-36a5c88b66bb-kube-api-access-7cn8s\") pod \"b235a23d-a7da-4545-8047-36a5c88b66bb\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.376206 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b235a23d-a7da-4545-8047-36a5c88b66bb-log-httpd\") pod \"b235a23d-a7da-4545-8047-36a5c88b66bb\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.376356 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b235a23d-a7da-4545-8047-36a5c88b66bb-run-httpd\") pod \"b235a23d-a7da-4545-8047-36a5c88b66bb\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.376389 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-combined-ca-bundle\") pod \"b235a23d-a7da-4545-8047-36a5c88b66bb\" (UID: 
\"b235a23d-a7da-4545-8047-36a5c88b66bb\") " Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.376413 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-scripts\") pod \"b235a23d-a7da-4545-8047-36a5c88b66bb\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.376431 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-config-data\") pod \"b235a23d-a7da-4545-8047-36a5c88b66bb\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.376527 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-sg-core-conf-yaml\") pod \"b235a23d-a7da-4545-8047-36a5c88b66bb\" (UID: \"b235a23d-a7da-4545-8047-36a5c88b66bb\") " Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.377031 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b235a23d-a7da-4545-8047-36a5c88b66bb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b235a23d-a7da-4545-8047-36a5c88b66bb" (UID: "b235a23d-a7da-4545-8047-36a5c88b66bb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.377248 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b235a23d-a7da-4545-8047-36a5c88b66bb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b235a23d-a7da-4545-8047-36a5c88b66bb" (UID: "b235a23d-a7da-4545-8047-36a5c88b66bb"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.396797 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b235a23d-a7da-4545-8047-36a5c88b66bb-kube-api-access-7cn8s" (OuterVolumeSpecName: "kube-api-access-7cn8s") pod "b235a23d-a7da-4545-8047-36a5c88b66bb" (UID: "b235a23d-a7da-4545-8047-36a5c88b66bb"). InnerVolumeSpecName "kube-api-access-7cn8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.397196 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-scripts" (OuterVolumeSpecName: "scripts") pod "b235a23d-a7da-4545-8047-36a5c88b66bb" (UID: "b235a23d-a7da-4545-8047-36a5c88b66bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.402821 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b235a23d-a7da-4545-8047-36a5c88b66bb","Type":"ContainerDied","Data":"9cd10fa3cb05354c0da5c497fcd240730dfbb46486ce43b1e0243cfdaa7149f9"} Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.402866 4832 scope.go:117] "RemoveContainer" containerID="c65b34d94b934b0d27407af16f002e76b75c46b527fd763b26a10bb4ba7605ab" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.402977 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.419074 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0988fee7-998d-4cf1-9740-9ccbdc012168","Type":"ContainerStarted","Data":"0101e80f6ae45a847088c3eff7a247071ad740f23afac5192519c93124901883"} Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.440727 4832 scope.go:117] "RemoveContainer" containerID="5f3bd740415bbd833dfadec8f2fa85f199c0de8f1aa08f389f3fc7cb1a5ece1c" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.459634 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b235a23d-a7da-4545-8047-36a5c88b66bb" (UID: "b235a23d-a7da-4545-8047-36a5c88b66bb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.484132 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.239739551 podStartE2EDuration="11.484110992s" podCreationTimestamp="2026-03-12 15:07:54 +0000 UTC" firstStartedPulling="2026-03-12 15:07:55.682257619 +0000 UTC m=+1234.326271845" lastFinishedPulling="2026-03-12 15:08:04.92662906 +0000 UTC m=+1243.570643286" observedRunningTime="2026-03-12 15:08:05.465102626 +0000 UTC m=+1244.109116862" watchObservedRunningTime="2026-03-12 15:08:05.484110992 +0000 UTC m=+1244.128125218" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.484498 4832 scope.go:117] "RemoveContainer" containerID="e14f155849ccf32d23a85d174aa6ae6ae743f5b69f1f8002159a0126c33000f0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.506052 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cn8s\" (UniqueName: 
\"kubernetes.io/projected/b235a23d-a7da-4545-8047-36a5c88b66bb-kube-api-access-7cn8s\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.506461 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b235a23d-a7da-4545-8047-36a5c88b66bb-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.506486 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b235a23d-a7da-4545-8047-36a5c88b66bb-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.506497 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.506892 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.526867 4832 scope.go:117] "RemoveContainer" containerID="c0fe9aec06c6585df3e5716ff58fdf792efa19a99db7c42bdd5c820ec5e575cb" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.560722 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555468-dnh7b"] Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.567662 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b235a23d-a7da-4545-8047-36a5c88b66bb" (UID: "b235a23d-a7da-4545-8047-36a5c88b66bb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.609494 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.658109 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-config-data" (OuterVolumeSpecName: "config-data") pod "b235a23d-a7da-4545-8047-36a5c88b66bb" (UID: "b235a23d-a7da-4545-8047-36a5c88b66bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.668511 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5b9648779f-rg6wj"] Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.712149 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b235a23d-a7da-4545-8047-36a5c88b66bb-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.741850 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.762308 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.775057 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:08:05 crc kubenswrapper[4832]: E0312 15:08:05.775659 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b235a23d-a7da-4545-8047-36a5c88b66bb" containerName="proxy-httpd" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.775681 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b235a23d-a7da-4545-8047-36a5c88b66bb" 
containerName="proxy-httpd" Mar 12 15:08:05 crc kubenswrapper[4832]: E0312 15:08:05.775705 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b235a23d-a7da-4545-8047-36a5c88b66bb" containerName="sg-core" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.775731 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b235a23d-a7da-4545-8047-36a5c88b66bb" containerName="sg-core" Mar 12 15:08:05 crc kubenswrapper[4832]: E0312 15:08:05.775745 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b235a23d-a7da-4545-8047-36a5c88b66bb" containerName="ceilometer-notification-agent" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.775755 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b235a23d-a7da-4545-8047-36a5c88b66bb" containerName="ceilometer-notification-agent" Mar 12 15:08:05 crc kubenswrapper[4832]: E0312 15:08:05.775773 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b235a23d-a7da-4545-8047-36a5c88b66bb" containerName="ceilometer-central-agent" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.775780 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b235a23d-a7da-4545-8047-36a5c88b66bb" containerName="ceilometer-central-agent" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.776077 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b235a23d-a7da-4545-8047-36a5c88b66bb" containerName="sg-core" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.776094 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b235a23d-a7da-4545-8047-36a5c88b66bb" containerName="ceilometer-central-agent" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.776139 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b235a23d-a7da-4545-8047-36a5c88b66bb" containerName="proxy-httpd" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.776155 4832 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b235a23d-a7da-4545-8047-36a5c88b66bb" containerName="ceilometer-notification-agent" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.778641 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.781390 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.781683 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.786231 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.813533 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-log-httpd\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.813580 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.813621 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-scripts\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.813643 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-run-httpd\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.813661 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp2d6\" (UniqueName: \"kubernetes.io/projected/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-kube-api-access-sp2d6\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.813687 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-config-data\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.813707 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.831330 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.831986 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d0e23234-0687-46a1-8c0f-823c92c0aebc" containerName="glance-log" containerID="cri-o://5eefbfa7eb669d0a4c862a0ab44eb862143339915cf3bd65f0fcdebf03ea9b81" gracePeriod=30 Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.832312 4832 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/glance-default-external-api-0" podUID="d0e23234-0687-46a1-8c0f-823c92c0aebc" containerName="glance-httpd" containerID="cri-o://ecce77dfc2c2cf511b6ae3e4dbd8a0809b952ff46a2082dc4deebb736927564a" gracePeriod=30 Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.918479 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-log-httpd\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.918578 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.918619 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-scripts\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.918639 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-run-httpd\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.918658 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp2d6\" (UniqueName: \"kubernetes.io/projected/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-kube-api-access-sp2d6\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" 
Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.918691 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-config-data\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.918709 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.919628 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-log-httpd\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.921170 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-run-httpd\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.928556 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.929214 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-scripts\") pod \"ceilometer-0\" (UID: 
\"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.929385 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-config-data\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.931349 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:05 crc kubenswrapper[4832]: I0312 15:08:05.940581 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp2d6\" (UniqueName: \"kubernetes.io/projected/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-kube-api-access-sp2d6\") pod \"ceilometer-0\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " pod="openstack/ceilometer-0" Mar 12 15:08:06 crc kubenswrapper[4832]: I0312 15:08:06.106040 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:08:06 crc kubenswrapper[4832]: I0312 15:08:06.428792 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b9648779f-rg6wj" event={"ID":"969337ac-7543-4b59-820e-61408d5af0c3","Type":"ContainerStarted","Data":"d2e841fb3c06c08c6f2b7eb2d05e0d2176303b64e42a63f110fc0d69ec713b30"} Mar 12 15:08:06 crc kubenswrapper[4832]: I0312 15:08:06.429144 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b9648779f-rg6wj" event={"ID":"969337ac-7543-4b59-820e-61408d5af0c3","Type":"ContainerStarted","Data":"6470212a095a19fc18dc4526f5b7bd0bd36621183e8c6a9e7c33bd92580bf856"} Mar 12 15:08:06 crc kubenswrapper[4832]: I0312 15:08:06.429159 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b9648779f-rg6wj" event={"ID":"969337ac-7543-4b59-820e-61408d5af0c3","Type":"ContainerStarted","Data":"dfa497a1c5e78b56ac62498394420f249dbde07d4812c33b70504c35a0f78016"} Mar 12 15:08:06 crc kubenswrapper[4832]: I0312 15:08:06.429212 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:06 crc kubenswrapper[4832]: I0312 15:08:06.429234 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:06 crc kubenswrapper[4832]: I0312 15:08:06.432726 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555468-dnh7b" event={"ID":"fa03d1b7-d97e-4806-86c9-8f77ce37f5bf","Type":"ContainerStarted","Data":"5232ad213234f5dc97ad3f760be1acb5bd91d1d3630eb48adb64dea6ccfe12d0"} Mar 12 15:08:06 crc kubenswrapper[4832]: I0312 15:08:06.439525 4832 generic.go:334] "Generic (PLEG): container finished" podID="d0e23234-0687-46a1-8c0f-823c92c0aebc" containerID="5eefbfa7eb669d0a4c862a0ab44eb862143339915cf3bd65f0fcdebf03ea9b81" exitCode=143 Mar 12 15:08:06 crc kubenswrapper[4832]: I0312 15:08:06.439599 
4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0e23234-0687-46a1-8c0f-823c92c0aebc","Type":"ContainerDied","Data":"5eefbfa7eb669d0a4c862a0ab44eb862143339915cf3bd65f0fcdebf03ea9b81"} Mar 12 15:08:06 crc kubenswrapper[4832]: I0312 15:08:06.462573 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5b9648779f-rg6wj" podStartSLOduration=6.462548571 podStartE2EDuration="6.462548571s" podCreationTimestamp="2026-03-12 15:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:08:06.456440796 +0000 UTC m=+1245.100455042" watchObservedRunningTime="2026-03-12 15:08:06.462548571 +0000 UTC m=+1245.106562797" Mar 12 15:08:06 crc kubenswrapper[4832]: I0312 15:08:06.582754 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:08:06 crc kubenswrapper[4832]: W0312 15:08:06.588372 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod129f8e7f_72ac_4e3d_87f0_0c0c43061fbd.slice/crio-8c5c1255aeb831f5cdfc7d280d87103ebb614b6fa7adbacabaffe18e1e745c50 WatchSource:0}: Error finding container 8c5c1255aeb831f5cdfc7d280d87103ebb614b6fa7adbacabaffe18e1e745c50: Status 404 returned error can't find the container with id 8c5c1255aeb831f5cdfc7d280d87103ebb614b6fa7adbacabaffe18e1e745c50 Mar 12 15:08:06 crc kubenswrapper[4832]: I0312 15:08:06.629627 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b235a23d-a7da-4545-8047-36a5c88b66bb" path="/var/lib/kubelet/pods/b235a23d-a7da-4545-8047-36a5c88b66bb/volumes" Mar 12 15:08:07 crc kubenswrapper[4832]: I0312 15:08:07.148351 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 15:08:07 crc kubenswrapper[4832]: I0312 15:08:07.148905 4832 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="81ebdaa9-c995-4936-9476-d996aa2532a9" containerName="glance-log" containerID="cri-o://157ed18a133546ce4428cb65e18546ee06d717fa470de24c726e276b19404ce0" gracePeriod=30 Mar 12 15:08:07 crc kubenswrapper[4832]: I0312 15:08:07.149003 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="81ebdaa9-c995-4936-9476-d996aa2532a9" containerName="glance-httpd" containerID="cri-o://a464003e073cc874b041e4e9bfc7257abfc4af61e117202bd3aea2e741d5d044" gracePeriod=30 Mar 12 15:08:07 crc kubenswrapper[4832]: I0312 15:08:07.473867 4832 generic.go:334] "Generic (PLEG): container finished" podID="81ebdaa9-c995-4936-9476-d996aa2532a9" containerID="157ed18a133546ce4428cb65e18546ee06d717fa470de24c726e276b19404ce0" exitCode=143 Mar 12 15:08:07 crc kubenswrapper[4832]: I0312 15:08:07.473969 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"81ebdaa9-c995-4936-9476-d996aa2532a9","Type":"ContainerDied","Data":"157ed18a133546ce4428cb65e18546ee06d717fa470de24c726e276b19404ce0"} Mar 12 15:08:07 crc kubenswrapper[4832]: I0312 15:08:07.483820 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd","Type":"ContainerStarted","Data":"8c5c1255aeb831f5cdfc7d280d87103ebb614b6fa7adbacabaffe18e1e745c50"} Mar 12 15:08:07 crc kubenswrapper[4832]: I0312 15:08:07.493936 4832 generic.go:334] "Generic (PLEG): container finished" podID="fa03d1b7-d97e-4806-86c9-8f77ce37f5bf" containerID="018dc5e56868d28a9fc5e0b53ebcad64eaf41e9c829b322f8f441040c6748a4f" exitCode=0 Mar 12 15:08:07 crc kubenswrapper[4832]: I0312 15:08:07.495196 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555468-dnh7b" 
event={"ID":"fa03d1b7-d97e-4806-86c9-8f77ce37f5bf","Type":"ContainerDied","Data":"018dc5e56868d28a9fc5e0b53ebcad64eaf41e9c829b322f8f441040c6748a4f"} Mar 12 15:08:07 crc kubenswrapper[4832]: I0312 15:08:07.912499 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:08:07 crc kubenswrapper[4832]: I0312 15:08:07.943934 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 15:08:07 crc kubenswrapper[4832]: I0312 15:08:07.944369 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="9f816534-91fd-42d6-8193-85a77ad3490c" containerName="kube-state-metrics" containerID="cri-o://67aea7477ebc6236ef618acd800acf46003a2be553aa9445c5a4bf771ea4982c" gracePeriod=30 Mar 12 15:08:08 crc kubenswrapper[4832]: I0312 15:08:08.507102 4832 generic.go:334] "Generic (PLEG): container finished" podID="9f816534-91fd-42d6-8193-85a77ad3490c" containerID="67aea7477ebc6236ef618acd800acf46003a2be553aa9445c5a4bf771ea4982c" exitCode=2 Mar 12 15:08:08 crc kubenswrapper[4832]: I0312 15:08:08.507201 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9f816534-91fd-42d6-8193-85a77ad3490c","Type":"ContainerDied","Data":"67aea7477ebc6236ef618acd800acf46003a2be553aa9445c5a4bf771ea4982c"} Mar 12 15:08:08 crc kubenswrapper[4832]: I0312 15:08:08.511600 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd","Type":"ContainerStarted","Data":"ad535f7cd9252c80e6e2ae8eaa3805b0b73dabead16def3b6433a644ae6a495b"} Mar 12 15:08:08 crc kubenswrapper[4832]: I0312 15:08:08.929826 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555468-dnh7b" Mar 12 15:08:08 crc kubenswrapper[4832]: I0312 15:08:08.992745 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w767t\" (UniqueName: \"kubernetes.io/projected/fa03d1b7-d97e-4806-86c9-8f77ce37f5bf-kube-api-access-w767t\") pod \"fa03d1b7-d97e-4806-86c9-8f77ce37f5bf\" (UID: \"fa03d1b7-d97e-4806-86c9-8f77ce37f5bf\") " Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.002552 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa03d1b7-d97e-4806-86c9-8f77ce37f5bf-kube-api-access-w767t" (OuterVolumeSpecName: "kube-api-access-w767t") pod "fa03d1b7-d97e-4806-86c9-8f77ce37f5bf" (UID: "fa03d1b7-d97e-4806-86c9-8f77ce37f5bf"). InnerVolumeSpecName "kube-api-access-w767t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.094211 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w767t\" (UniqueName: \"kubernetes.io/projected/fa03d1b7-d97e-4806-86c9-8f77ce37f5bf-kube-api-access-w767t\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.185259 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.195899 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfkz2\" (UniqueName: \"kubernetes.io/projected/9f816534-91fd-42d6-8193-85a77ad3490c-kube-api-access-kfkz2\") pod \"9f816534-91fd-42d6-8193-85a77ad3490c\" (UID: \"9f816534-91fd-42d6-8193-85a77ad3490c\") " Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.204828 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f816534-91fd-42d6-8193-85a77ad3490c-kube-api-access-kfkz2" (OuterVolumeSpecName: "kube-api-access-kfkz2") pod "9f816534-91fd-42d6-8193-85a77ad3490c" (UID: "9f816534-91fd-42d6-8193-85a77ad3490c"). InnerVolumeSpecName "kube-api-access-kfkz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.299271 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfkz2\" (UniqueName: \"kubernetes.io/projected/9f816534-91fd-42d6-8193-85a77ad3490c-kube-api-access-kfkz2\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.528152 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555468-dnh7b" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.528163 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555468-dnh7b" event={"ID":"fa03d1b7-d97e-4806-86c9-8f77ce37f5bf","Type":"ContainerDied","Data":"5232ad213234f5dc97ad3f760be1acb5bd91d1d3630eb48adb64dea6ccfe12d0"} Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.528793 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5232ad213234f5dc97ad3f760be1acb5bd91d1d3630eb48adb64dea6ccfe12d0" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.529888 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-745cdbf99b-kdz5c" podUID="7b32181c-0268-4e3e-8b7b-f2811720ce58" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.529963 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.534262 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9f816534-91fd-42d6-8193-85a77ad3490c","Type":"ContainerDied","Data":"7d00d2b604032c4b7ca45f8ebc1091f95ddf1b418cc591e74a5a12226d1727ff"} Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.534307 4832 scope.go:117] "RemoveContainer" containerID="67aea7477ebc6236ef618acd800acf46003a2be553aa9445c5a4bf771ea4982c" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.534470 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.549590 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd","Type":"ContainerStarted","Data":"cc957eb9366269a201d7b61425da3d33afe43e995495a92ee322393a3f38782f"} Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.555587 4832 generic.go:334] "Generic (PLEG): container finished" podID="d0e23234-0687-46a1-8c0f-823c92c0aebc" containerID="ecce77dfc2c2cf511b6ae3e4dbd8a0809b952ff46a2082dc4deebb736927564a" exitCode=0 Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.555627 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0e23234-0687-46a1-8c0f-823c92c0aebc","Type":"ContainerDied","Data":"ecce77dfc2c2cf511b6ae3e4dbd8a0809b952ff46a2082dc4deebb736927564a"} Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.585749 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.592967 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.603638 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 15:08:09 crc kubenswrapper[4832]: E0312 15:08:09.604101 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa03d1b7-d97e-4806-86c9-8f77ce37f5bf" containerName="oc" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.604120 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa03d1b7-d97e-4806-86c9-8f77ce37f5bf" containerName="oc" Mar 12 15:08:09 crc kubenswrapper[4832]: E0312 15:08:09.604168 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f816534-91fd-42d6-8193-85a77ad3490c" containerName="kube-state-metrics" Mar 12 15:08:09 crc 
kubenswrapper[4832]: I0312 15:08:09.604176 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f816534-91fd-42d6-8193-85a77ad3490c" containerName="kube-state-metrics" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.604411 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f816534-91fd-42d6-8193-85a77ad3490c" containerName="kube-state-metrics" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.604431 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa03d1b7-d97e-4806-86c9-8f77ce37f5bf" containerName="oc" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.605173 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.607742 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.608702 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.612145 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.707372 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/64122d3d-d2ec-49ad-a01e-1497d5889af6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"64122d3d-d2ec-49ad-a01e-1497d5889af6\") " pod="openstack/kube-state-metrics-0" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.707691 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64122d3d-d2ec-49ad-a01e-1497d5889af6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: 
\"64122d3d-d2ec-49ad-a01e-1497d5889af6\") " pod="openstack/kube-state-metrics-0" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.707788 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/64122d3d-d2ec-49ad-a01e-1497d5889af6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"64122d3d-d2ec-49ad-a01e-1497d5889af6\") " pod="openstack/kube-state-metrics-0" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.707843 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngd67\" (UniqueName: \"kubernetes.io/projected/64122d3d-d2ec-49ad-a01e-1497d5889af6-kube-api-access-ngd67\") pod \"kube-state-metrics-0\" (UID: \"64122d3d-d2ec-49ad-a01e-1497d5889af6\") " pod="openstack/kube-state-metrics-0" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.809450 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngd67\" (UniqueName: \"kubernetes.io/projected/64122d3d-d2ec-49ad-a01e-1497d5889af6-kube-api-access-ngd67\") pod \"kube-state-metrics-0\" (UID: \"64122d3d-d2ec-49ad-a01e-1497d5889af6\") " pod="openstack/kube-state-metrics-0" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.809641 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/64122d3d-d2ec-49ad-a01e-1497d5889af6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"64122d3d-d2ec-49ad-a01e-1497d5889af6\") " pod="openstack/kube-state-metrics-0" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.809718 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64122d3d-d2ec-49ad-a01e-1497d5889af6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: 
\"64122d3d-d2ec-49ad-a01e-1497d5889af6\") " pod="openstack/kube-state-metrics-0" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.809857 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/64122d3d-d2ec-49ad-a01e-1497d5889af6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"64122d3d-d2ec-49ad-a01e-1497d5889af6\") " pod="openstack/kube-state-metrics-0" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.814149 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/64122d3d-d2ec-49ad-a01e-1497d5889af6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"64122d3d-d2ec-49ad-a01e-1497d5889af6\") " pod="openstack/kube-state-metrics-0" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.814195 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/64122d3d-d2ec-49ad-a01e-1497d5889af6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"64122d3d-d2ec-49ad-a01e-1497d5889af6\") " pod="openstack/kube-state-metrics-0" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.814230 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64122d3d-d2ec-49ad-a01e-1497d5889af6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"64122d3d-d2ec-49ad-a01e-1497d5889af6\") " pod="openstack/kube-state-metrics-0" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.828919 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngd67\" (UniqueName: \"kubernetes.io/projected/64122d3d-d2ec-49ad-a01e-1497d5889af6-kube-api-access-ngd67\") pod \"kube-state-metrics-0\" (UID: \"64122d3d-d2ec-49ad-a01e-1497d5889af6\") " 
pod="openstack/kube-state-metrics-0" Mar 12 15:08:09 crc kubenswrapper[4832]: I0312 15:08:09.997936 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555462-wmmjh"] Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.004330 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555462-wmmjh"] Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.589191 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0e23234-0687-46a1-8c0f-823c92c0aebc","Type":"ContainerDied","Data":"bc6e741fc34569080e14216765942a84331519123480684933d69d999c6c8e3d"} Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.589239 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc6e741fc34569080e14216765942a84331519123480684933d69d999c6c8e3d" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.638959 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f816534-91fd-42d6-8193-85a77ad3490c" path="/var/lib/kubelet/pods/9f816534-91fd-42d6-8193-85a77ad3490c/volumes" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.640085 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b40d15e5-51c1-4be3-b8c1-6b92290aa59d" path="/var/lib/kubelet/pods/b40d15e5-51c1-4be3-b8c1-6b92290aa59d/volumes" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.705597 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.710133 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.733308 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-config-data\") pod \"d0e23234-0687-46a1-8c0f-823c92c0aebc\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.733385 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-public-tls-certs\") pod \"d0e23234-0687-46a1-8c0f-823c92c0aebc\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.733440 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-combined-ca-bundle\") pod \"d0e23234-0687-46a1-8c0f-823c92c0aebc\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.733480 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0e23234-0687-46a1-8c0f-823c92c0aebc-logs\") pod \"d0e23234-0687-46a1-8c0f-823c92c0aebc\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.733611 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-scripts\") pod \"d0e23234-0687-46a1-8c0f-823c92c0aebc\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.733660 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4bqr\" (UniqueName: 
\"kubernetes.io/projected/d0e23234-0687-46a1-8c0f-823c92c0aebc-kube-api-access-j4bqr\") pod \"d0e23234-0687-46a1-8c0f-823c92c0aebc\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.733686 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"d0e23234-0687-46a1-8c0f-823c92c0aebc\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.733723 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0e23234-0687-46a1-8c0f-823c92c0aebc-httpd-run\") pod \"d0e23234-0687-46a1-8c0f-823c92c0aebc\" (UID: \"d0e23234-0687-46a1-8c0f-823c92c0aebc\") " Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.741857 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0e23234-0687-46a1-8c0f-823c92c0aebc-logs" (OuterVolumeSpecName: "logs") pod "d0e23234-0687-46a1-8c0f-823c92c0aebc" (UID: "d0e23234-0687-46a1-8c0f-823c92c0aebc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.752891 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0e23234-0687-46a1-8c0f-823c92c0aebc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d0e23234-0687-46a1-8c0f-823c92c0aebc" (UID: "d0e23234-0687-46a1-8c0f-823c92c0aebc"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.767135 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e23234-0687-46a1-8c0f-823c92c0aebc-kube-api-access-j4bqr" (OuterVolumeSpecName: "kube-api-access-j4bqr") pod "d0e23234-0687-46a1-8c0f-823c92c0aebc" (UID: "d0e23234-0687-46a1-8c0f-823c92c0aebc"). InnerVolumeSpecName "kube-api-access-j4bqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.772701 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "d0e23234-0687-46a1-8c0f-823c92c0aebc" (UID: "d0e23234-0687-46a1-8c0f-823c92c0aebc"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.789612 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-scripts" (OuterVolumeSpecName: "scripts") pod "d0e23234-0687-46a1-8c0f-823c92c0aebc" (UID: "d0e23234-0687-46a1-8c0f-823c92c0aebc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.840093 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.840313 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4bqr\" (UniqueName: \"kubernetes.io/projected/d0e23234-0687-46a1-8c0f-823c92c0aebc-kube-api-access-j4bqr\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.840339 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.840349 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0e23234-0687-46a1-8c0f-823c92c0aebc-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.840358 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0e23234-0687-46a1-8c0f-823c92c0aebc-logs\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.883822 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.897256 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-config-data" (OuterVolumeSpecName: "config-data") pod "d0e23234-0687-46a1-8c0f-823c92c0aebc" (UID: "d0e23234-0687-46a1-8c0f-823c92c0aebc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.898451 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0e23234-0687-46a1-8c0f-823c92c0aebc" (UID: "d0e23234-0687-46a1-8c0f-823c92c0aebc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.917732 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d0e23234-0687-46a1-8c0f-823c92c0aebc" (UID: "d0e23234-0687-46a1-8c0f-823c92c0aebc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.946123 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.946155 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.946167 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e23234-0687-46a1-8c0f-823c92c0aebc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.946179 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Mar 12 
15:08:10 crc kubenswrapper[4832]: I0312 15:08:10.994054 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.046741 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ebdaa9-c995-4936-9476-d996aa2532a9-logs\") pod \"81ebdaa9-c995-4936-9476-d996aa2532a9\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.046843 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l42s\" (UniqueName: \"kubernetes.io/projected/81ebdaa9-c995-4936-9476-d996aa2532a9-kube-api-access-4l42s\") pod \"81ebdaa9-c995-4936-9476-d996aa2532a9\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.046875 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-internal-tls-certs\") pod \"81ebdaa9-c995-4936-9476-d996aa2532a9\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.046895 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"81ebdaa9-c995-4936-9476-d996aa2532a9\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.046954 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-combined-ca-bundle\") pod \"81ebdaa9-c995-4936-9476-d996aa2532a9\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.046998 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-scripts\") pod \"81ebdaa9-c995-4936-9476-d996aa2532a9\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.047025 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-config-data\") pod \"81ebdaa9-c995-4936-9476-d996aa2532a9\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.047063 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81ebdaa9-c995-4936-9476-d996aa2532a9-httpd-run\") pod \"81ebdaa9-c995-4936-9476-d996aa2532a9\" (UID: \"81ebdaa9-c995-4936-9476-d996aa2532a9\") " Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.047363 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ebdaa9-c995-4936-9476-d996aa2532a9-logs" (OuterVolumeSpecName: "logs") pod "81ebdaa9-c995-4936-9476-d996aa2532a9" (UID: "81ebdaa9-c995-4936-9476-d996aa2532a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.047737 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ebdaa9-c995-4936-9476-d996aa2532a9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "81ebdaa9-c995-4936-9476-d996aa2532a9" (UID: "81ebdaa9-c995-4936-9476-d996aa2532a9"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.053615 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ebdaa9-c995-4936-9476-d996aa2532a9-kube-api-access-4l42s" (OuterVolumeSpecName: "kube-api-access-4l42s") pod "81ebdaa9-c995-4936-9476-d996aa2532a9" (UID: "81ebdaa9-c995-4936-9476-d996aa2532a9"). InnerVolumeSpecName "kube-api-access-4l42s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.053612 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "81ebdaa9-c995-4936-9476-d996aa2532a9" (UID: "81ebdaa9-c995-4936-9476-d996aa2532a9"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.053699 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-scripts" (OuterVolumeSpecName: "scripts") pod "81ebdaa9-c995-4936-9476-d996aa2532a9" (UID: "81ebdaa9-c995-4936-9476-d996aa2532a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.090048 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81ebdaa9-c995-4936-9476-d996aa2532a9" (UID: "81ebdaa9-c995-4936-9476-d996aa2532a9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.122349 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-config-data" (OuterVolumeSpecName: "config-data") pod "81ebdaa9-c995-4936-9476-d996aa2532a9" (UID: "81ebdaa9-c995-4936-9476-d996aa2532a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.122724 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "81ebdaa9-c995-4936-9476-d996aa2532a9" (UID: "81ebdaa9-c995-4936-9476-d996aa2532a9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.149422 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l42s\" (UniqueName: \"kubernetes.io/projected/81ebdaa9-c995-4936-9476-d996aa2532a9-kube-api-access-4l42s\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.149460 4832 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.149486 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.149496 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.149518 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.149528 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ebdaa9-c995-4936-9476-d996aa2532a9-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.149536 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81ebdaa9-c995-4936-9476-d996aa2532a9-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.149544 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ebdaa9-c995-4936-9476-d996aa2532a9-logs\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.171992 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.250932 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.271380 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.392008 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7c8f87d6b5-dbffr" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.463333 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8ccdd85bd-b4bf5"] 
Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.463671 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8ccdd85bd-b4bf5" podUID="97e7d696-8bfc-49ec-ae8c-6061b9813d16" containerName="neutron-api" containerID="cri-o://81a8742ae704c0476e70eb876bdb98687dad42e72b8c6dcb42fdaf3a7dab5eb3" gracePeriod=30 Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.463793 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8ccdd85bd-b4bf5" podUID="97e7d696-8bfc-49ec-ae8c-6061b9813d16" containerName="neutron-httpd" containerID="cri-o://7c65c318197a5e7c198b9634dd571c80a048fca8dcedb43f84b073eee536eef5" gracePeriod=30 Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.601228 4832 generic.go:334] "Generic (PLEG): container finished" podID="81ebdaa9-c995-4936-9476-d996aa2532a9" containerID="a464003e073cc874b041e4e9bfc7257abfc4af61e117202bd3aea2e741d5d044" exitCode=0 Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.601308 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"81ebdaa9-c995-4936-9476-d996aa2532a9","Type":"ContainerDied","Data":"a464003e073cc874b041e4e9bfc7257abfc4af61e117202bd3aea2e741d5d044"} Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.601339 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"81ebdaa9-c995-4936-9476-d996aa2532a9","Type":"ContainerDied","Data":"2123a69629ca06fe4266fdbeb7fc66db7a84fdd7e7467e762db9b29ecbcb9079"} Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.601358 4832 scope.go:117] "RemoveContainer" containerID="a464003e073cc874b041e4e9bfc7257abfc4af61e117202bd3aea2e741d5d044" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.601499 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.607768 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd","Type":"ContainerStarted","Data":"0d5ff35b04d8a95e78f7b062e6dee2fc871babf35a3500cdbcbbe61b15027624"} Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.609952 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"64122d3d-d2ec-49ad-a01e-1497d5889af6","Type":"ContainerStarted","Data":"15dff73611f3d31cac885a3c67166adf61a74f7f4d7751c604fe3e21f17534e8"} Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.613935 4832 generic.go:334] "Generic (PLEG): container finished" podID="97e7d696-8bfc-49ec-ae8c-6061b9813d16" containerID="7c65c318197a5e7c198b9634dd571c80a048fca8dcedb43f84b073eee536eef5" exitCode=0 Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.614055 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.614520 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8ccdd85bd-b4bf5" event={"ID":"97e7d696-8bfc-49ec-ae8c-6061b9813d16","Type":"ContainerDied","Data":"7c65c318197a5e7c198b9634dd571c80a048fca8dcedb43f84b073eee536eef5"} Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.646455 4832 scope.go:117] "RemoveContainer" containerID="157ed18a133546ce4428cb65e18546ee06d717fa470de24c726e276b19404ce0" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.801925 4832 scope.go:117] "RemoveContainer" containerID="a464003e073cc874b041e4e9bfc7257abfc4af61e117202bd3aea2e741d5d044" Mar 12 15:08:11 crc kubenswrapper[4832]: E0312 15:08:11.806108 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a464003e073cc874b041e4e9bfc7257abfc4af61e117202bd3aea2e741d5d044\": container with ID starting with a464003e073cc874b041e4e9bfc7257abfc4af61e117202bd3aea2e741d5d044 not found: ID does not exist" containerID="a464003e073cc874b041e4e9bfc7257abfc4af61e117202bd3aea2e741d5d044" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.806156 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a464003e073cc874b041e4e9bfc7257abfc4af61e117202bd3aea2e741d5d044"} err="failed to get container status \"a464003e073cc874b041e4e9bfc7257abfc4af61e117202bd3aea2e741d5d044\": rpc error: code = NotFound desc = could not find container \"a464003e073cc874b041e4e9bfc7257abfc4af61e117202bd3aea2e741d5d044\": container with ID starting with a464003e073cc874b041e4e9bfc7257abfc4af61e117202bd3aea2e741d5d044 not found: ID does not exist" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.806185 4832 scope.go:117] "RemoveContainer" containerID="157ed18a133546ce4428cb65e18546ee06d717fa470de24c726e276b19404ce0" Mar 12 15:08:11 crc 
kubenswrapper[4832]: E0312 15:08:11.806617 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"157ed18a133546ce4428cb65e18546ee06d717fa470de24c726e276b19404ce0\": container with ID starting with 157ed18a133546ce4428cb65e18546ee06d717fa470de24c726e276b19404ce0 not found: ID does not exist" containerID="157ed18a133546ce4428cb65e18546ee06d717fa470de24c726e276b19404ce0" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.806675 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"157ed18a133546ce4428cb65e18546ee06d717fa470de24c726e276b19404ce0"} err="failed to get container status \"157ed18a133546ce4428cb65e18546ee06d717fa470de24c726e276b19404ce0\": rpc error: code = NotFound desc = could not find container \"157ed18a133546ce4428cb65e18546ee06d717fa470de24c726e276b19404ce0\": container with ID starting with 157ed18a133546ce4428cb65e18546ee06d717fa470de24c726e276b19404ce0 not found: ID does not exist" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.827793 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.836116 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.850514 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.887633 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.906840 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:08:11 crc kubenswrapper[4832]: E0312 15:08:11.907218 4832 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="81ebdaa9-c995-4936-9476-d996aa2532a9" containerName="glance-log" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.907245 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ebdaa9-c995-4936-9476-d996aa2532a9" containerName="glance-log" Mar 12 15:08:11 crc kubenswrapper[4832]: E0312 15:08:11.907264 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e23234-0687-46a1-8c0f-823c92c0aebc" containerName="glance-httpd" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.907272 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e23234-0687-46a1-8c0f-823c92c0aebc" containerName="glance-httpd" Mar 12 15:08:11 crc kubenswrapper[4832]: E0312 15:08:11.907288 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ebdaa9-c995-4936-9476-d996aa2532a9" containerName="glance-httpd" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.907294 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ebdaa9-c995-4936-9476-d996aa2532a9" containerName="glance-httpd" Mar 12 15:08:11 crc kubenswrapper[4832]: E0312 15:08:11.907317 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e23234-0687-46a1-8c0f-823c92c0aebc" containerName="glance-log" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.907324 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e23234-0687-46a1-8c0f-823c92c0aebc" containerName="glance-log" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.907545 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e23234-0687-46a1-8c0f-823c92c0aebc" containerName="glance-log" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.907571 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e23234-0687-46a1-8c0f-823c92c0aebc" containerName="glance-httpd" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.907585 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ebdaa9-c995-4936-9476-d996aa2532a9" 
containerName="glance-httpd" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.907601 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ebdaa9-c995-4936-9476-d996aa2532a9" containerName="glance-log" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.908689 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.911540 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.911629 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-65cqt" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.911886 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.912146 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.931166 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.933995 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.951136 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.951295 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.952261 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:08:11 crc kubenswrapper[4832]: I0312 15:08:11.961274 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.074521 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa341ea-d6ca-4afd-b425-5197402c2ff8-logs\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.074586 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0aa341ea-d6ca-4afd-b425-5197402c2ff8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.074615 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4ptg\" (UniqueName: \"kubernetes.io/projected/0aa341ea-d6ca-4afd-b425-5197402c2ff8-kube-api-access-s4ptg\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc 
kubenswrapper[4832]: I0312 15:08:12.074655 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp2df\" (UniqueName: \"kubernetes.io/projected/80295e6a-6a0d-4cb0-868d-684a2631b1eb-kube-api-access-fp2df\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.074813 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80295e6a-6a0d-4cb0-868d-684a2631b1eb-logs\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.074961 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aa341ea-d6ca-4afd-b425-5197402c2ff8-scripts\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.075035 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa341ea-d6ca-4afd-b425-5197402c2ff8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.075105 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80295e6a-6a0d-4cb0-868d-684a2631b1eb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " 
pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.075162 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa341ea-d6ca-4afd-b425-5197402c2ff8-config-data\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.075202 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.075228 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80295e6a-6a0d-4cb0-868d-684a2631b1eb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.075272 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80295e6a-6a0d-4cb0-868d-684a2631b1eb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.075338 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/80295e6a-6a0d-4cb0-868d-684a2631b1eb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " 
pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.075370 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80295e6a-6a0d-4cb0-868d-684a2631b1eb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.075401 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0aa341ea-d6ca-4afd-b425-5197402c2ff8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.075461 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.176790 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa341ea-d6ca-4afd-b425-5197402c2ff8-logs\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.176863 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0aa341ea-d6ca-4afd-b425-5197402c2ff8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " 
pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.176890 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4ptg\" (UniqueName: \"kubernetes.io/projected/0aa341ea-d6ca-4afd-b425-5197402c2ff8-kube-api-access-s4ptg\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.176907 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp2df\" (UniqueName: \"kubernetes.io/projected/80295e6a-6a0d-4cb0-868d-684a2631b1eb-kube-api-access-fp2df\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.176932 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80295e6a-6a0d-4cb0-868d-684a2631b1eb-logs\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.176972 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aa341ea-d6ca-4afd-b425-5197402c2ff8-scripts\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.176997 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa341ea-d6ca-4afd-b425-5197402c2ff8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" 
Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.177019 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80295e6a-6a0d-4cb0-868d-684a2631b1eb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.177045 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa341ea-d6ca-4afd-b425-5197402c2ff8-config-data\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.177071 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.177094 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80295e6a-6a0d-4cb0-868d-684a2631b1eb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.177119 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80295e6a-6a0d-4cb0-868d-684a2631b1eb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.177147 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/80295e6a-6a0d-4cb0-868d-684a2631b1eb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.177163 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80295e6a-6a0d-4cb0-868d-684a2631b1eb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.177193 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0aa341ea-d6ca-4afd-b425-5197402c2ff8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.177221 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.177311 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa341ea-d6ca-4afd-b425-5197402c2ff8-logs\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.177988 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.178021 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.180037 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0aa341ea-d6ca-4afd-b425-5197402c2ff8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.181862 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0aa341ea-d6ca-4afd-b425-5197402c2ff8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.183759 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aa341ea-d6ca-4afd-b425-5197402c2ff8-scripts\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.184546 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0aa341ea-d6ca-4afd-b425-5197402c2ff8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.185596 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/80295e6a-6a0d-4cb0-868d-684a2631b1eb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.188362 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80295e6a-6a0d-4cb0-868d-684a2631b1eb-logs\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.188838 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80295e6a-6a0d-4cb0-868d-684a2631b1eb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.189000 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80295e6a-6a0d-4cb0-868d-684a2631b1eb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.189019 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80295e6a-6a0d-4cb0-868d-684a2631b1eb-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.189391 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa341ea-d6ca-4afd-b425-5197402c2ff8-config-data\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.190345 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80295e6a-6a0d-4cb0-868d-684a2631b1eb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.198238 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp2df\" (UniqueName: \"kubernetes.io/projected/80295e6a-6a0d-4cb0-868d-684a2631b1eb-kube-api-access-fp2df\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.204188 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4ptg\" (UniqueName: \"kubernetes.io/projected/0aa341ea-d6ca-4afd-b425-5197402c2ff8-kube-api-access-s4ptg\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.220398 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"80295e6a-6a0d-4cb0-868d-684a2631b1eb\") " 
pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.220827 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"0aa341ea-d6ca-4afd-b425-5197402c2ff8\") " pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.237299 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.273251 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.635413 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ebdaa9-c995-4936-9476-d996aa2532a9" path="/var/lib/kubelet/pods/81ebdaa9-c995-4936-9476-d996aa2532a9/volumes" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.637635 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e23234-0687-46a1-8c0f-823c92c0aebc" path="/var/lib/kubelet/pods/d0e23234-0687-46a1-8c0f-823c92c0aebc/volumes" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.638518 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"64122d3d-d2ec-49ad-a01e-1497d5889af6","Type":"ContainerStarted","Data":"2549bf6b52b0bf91e733f92a18b3ea43ce0e8b260bdd91b431ee60eaf9448390"} Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.712919 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.3582418560000002 podStartE2EDuration="3.712902158s" podCreationTimestamp="2026-03-12 15:08:09 +0000 UTC" firstStartedPulling="2026-03-12 15:08:11.27554468 +0000 UTC m=+1249.919558906" lastFinishedPulling="2026-03-12 
15:08:11.630204982 +0000 UTC m=+1250.274219208" observedRunningTime="2026-03-12 15:08:12.694337834 +0000 UTC m=+1251.338352060" watchObservedRunningTime="2026-03-12 15:08:12.712902158 +0000 UTC m=+1251.356916374" Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.770950 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 15:08:12 crc kubenswrapper[4832]: W0312 15:08:12.795033 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aa341ea_d6ca_4afd_b425_5197402c2ff8.slice/crio-e0ff69952e23c1c3e3e4497713fc65d4890d042e70f8217269792dbe8375a431 WatchSource:0}: Error finding container e0ff69952e23c1c3e3e4497713fc65d4890d042e70f8217269792dbe8375a431: Status 404 returned error can't find the container with id e0ff69952e23c1c3e3e4497713fc65d4890d042e70f8217269792dbe8375a431 Mar 12 15:08:12 crc kubenswrapper[4832]: I0312 15:08:12.907264 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 15:08:13 crc kubenswrapper[4832]: I0312 15:08:13.643720 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd","Type":"ContainerStarted","Data":"613610414f4956282f35997e9019cc2983294a30912359f4a46d8f374b2b258c"} Mar 12 15:08:13 crc kubenswrapper[4832]: I0312 15:08:13.644236 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerName="proxy-httpd" containerID="cri-o://613610414f4956282f35997e9019cc2983294a30912359f4a46d8f374b2b258c" gracePeriod=30 Mar 12 15:08:13 crc kubenswrapper[4832]: I0312 15:08:13.644250 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 15:08:13 crc kubenswrapper[4832]: I0312 15:08:13.643842 4832 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerName="ceilometer-central-agent" containerID="cri-o://ad535f7cd9252c80e6e2ae8eaa3805b0b73dabead16def3b6433a644ae6a495b" gracePeriod=30 Mar 12 15:08:13 crc kubenswrapper[4832]: I0312 15:08:13.644338 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerName="ceilometer-notification-agent" containerID="cri-o://cc957eb9366269a201d7b61425da3d33afe43e995495a92ee322393a3f38782f" gracePeriod=30 Mar 12 15:08:13 crc kubenswrapper[4832]: I0312 15:08:13.644358 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerName="sg-core" containerID="cri-o://0d5ff35b04d8a95e78f7b062e6dee2fc871babf35a3500cdbcbbe61b15027624" gracePeriod=30 Mar 12 15:08:13 crc kubenswrapper[4832]: I0312 15:08:13.656219 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0aa341ea-d6ca-4afd-b425-5197402c2ff8","Type":"ContainerStarted","Data":"515149a0822144a5b7952cdf46da312ca56991dec0a76aec1a7f4ef7f58866bf"} Mar 12 15:08:13 crc kubenswrapper[4832]: I0312 15:08:13.656292 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0aa341ea-d6ca-4afd-b425-5197402c2ff8","Type":"ContainerStarted","Data":"e0ff69952e23c1c3e3e4497713fc65d4890d042e70f8217269792dbe8375a431"} Mar 12 15:08:13 crc kubenswrapper[4832]: I0312 15:08:13.659569 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"80295e6a-6a0d-4cb0-868d-684a2631b1eb","Type":"ContainerStarted","Data":"7c0e91f2c19ecb3df5b76098e7bea276fb1af42833a6c503a2274ab9c3b9b284"} Mar 12 15:08:13 crc kubenswrapper[4832]: I0312 15:08:13.659599 4832 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"80295e6a-6a0d-4cb0-868d-684a2631b1eb","Type":"ContainerStarted","Data":"8bb7bddb4dbd621f2014dbbc5bb2413d06992f4e12130f8000b9c407c2be8e36"} Mar 12 15:08:13 crc kubenswrapper[4832]: I0312 15:08:13.659612 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 12 15:08:13 crc kubenswrapper[4832]: I0312 15:08:13.667975 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.735308579 podStartE2EDuration="8.667955905s" podCreationTimestamp="2026-03-12 15:08:05 +0000 UTC" firstStartedPulling="2026-03-12 15:08:06.591631941 +0000 UTC m=+1245.235646177" lastFinishedPulling="2026-03-12 15:08:12.524279277 +0000 UTC m=+1251.168293503" observedRunningTime="2026-03-12 15:08:13.665844955 +0000 UTC m=+1252.309859211" watchObservedRunningTime="2026-03-12 15:08:13.667955905 +0000 UTC m=+1252.311970131" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.206211 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.320216 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b32181c-0268-4e3e-8b7b-f2811720ce58-scripts\") pod \"7b32181c-0268-4e3e-8b7b-f2811720ce58\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.320395 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b32181c-0268-4e3e-8b7b-f2811720ce58-combined-ca-bundle\") pod \"7b32181c-0268-4e3e-8b7b-f2811720ce58\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.320539 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b32181c-0268-4e3e-8b7b-f2811720ce58-logs\") pod \"7b32181c-0268-4e3e-8b7b-f2811720ce58\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.320608 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b32181c-0268-4e3e-8b7b-f2811720ce58-config-data\") pod \"7b32181c-0268-4e3e-8b7b-f2811720ce58\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.320657 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b32181c-0268-4e3e-8b7b-f2811720ce58-horizon-tls-certs\") pod \"7b32181c-0268-4e3e-8b7b-f2811720ce58\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.320740 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh5b5\" (UniqueName: 
\"kubernetes.io/projected/7b32181c-0268-4e3e-8b7b-f2811720ce58-kube-api-access-nh5b5\") pod \"7b32181c-0268-4e3e-8b7b-f2811720ce58\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.320808 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7b32181c-0268-4e3e-8b7b-f2811720ce58-horizon-secret-key\") pod \"7b32181c-0268-4e3e-8b7b-f2811720ce58\" (UID: \"7b32181c-0268-4e3e-8b7b-f2811720ce58\") " Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.321373 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b32181c-0268-4e3e-8b7b-f2811720ce58-logs" (OuterVolumeSpecName: "logs") pod "7b32181c-0268-4e3e-8b7b-f2811720ce58" (UID: "7b32181c-0268-4e3e-8b7b-f2811720ce58"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.322108 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b32181c-0268-4e3e-8b7b-f2811720ce58-logs\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.337805 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b32181c-0268-4e3e-8b7b-f2811720ce58-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7b32181c-0268-4e3e-8b7b-f2811720ce58" (UID: "7b32181c-0268-4e3e-8b7b-f2811720ce58"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.344558 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b32181c-0268-4e3e-8b7b-f2811720ce58-kube-api-access-nh5b5" (OuterVolumeSpecName: "kube-api-access-nh5b5") pod "7b32181c-0268-4e3e-8b7b-f2811720ce58" (UID: "7b32181c-0268-4e3e-8b7b-f2811720ce58"). InnerVolumeSpecName "kube-api-access-nh5b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.349448 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b32181c-0268-4e3e-8b7b-f2811720ce58-config-data" (OuterVolumeSpecName: "config-data") pod "7b32181c-0268-4e3e-8b7b-f2811720ce58" (UID: "7b32181c-0268-4e3e-8b7b-f2811720ce58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.353890 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b32181c-0268-4e3e-8b7b-f2811720ce58-scripts" (OuterVolumeSpecName: "scripts") pod "7b32181c-0268-4e3e-8b7b-f2811720ce58" (UID: "7b32181c-0268-4e3e-8b7b-f2811720ce58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.353908 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b32181c-0268-4e3e-8b7b-f2811720ce58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b32181c-0268-4e3e-8b7b-f2811720ce58" (UID: "7b32181c-0268-4e3e-8b7b-f2811720ce58"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.398703 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b32181c-0268-4e3e-8b7b-f2811720ce58-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "7b32181c-0268-4e3e-8b7b-f2811720ce58" (UID: "7b32181c-0268-4e3e-8b7b-f2811720ce58"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.424118 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b32181c-0268-4e3e-8b7b-f2811720ce58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.424153 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b32181c-0268-4e3e-8b7b-f2811720ce58-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.424162 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b32181c-0268-4e3e-8b7b-f2811720ce58-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.424173 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh5b5\" (UniqueName: \"kubernetes.io/projected/7b32181c-0268-4e3e-8b7b-f2811720ce58-kube-api-access-nh5b5\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.424185 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7b32181c-0268-4e3e-8b7b-f2811720ce58-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.424195 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/7b32181c-0268-4e3e-8b7b-f2811720ce58-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.673753 4832 generic.go:334] "Generic (PLEG): container finished" podID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerID="613610414f4956282f35997e9019cc2983294a30912359f4a46d8f374b2b258c" exitCode=0 Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.673796 4832 generic.go:334] "Generic (PLEG): container finished" podID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerID="0d5ff35b04d8a95e78f7b062e6dee2fc871babf35a3500cdbcbbe61b15027624" exitCode=2 Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.673806 4832 generic.go:334] "Generic (PLEG): container finished" podID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerID="cc957eb9366269a201d7b61425da3d33afe43e995495a92ee322393a3f38782f" exitCode=0 Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.673886 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd","Type":"ContainerDied","Data":"613610414f4956282f35997e9019cc2983294a30912359f4a46d8f374b2b258c"} Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.673942 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd","Type":"ContainerDied","Data":"0d5ff35b04d8a95e78f7b062e6dee2fc871babf35a3500cdbcbbe61b15027624"} Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.673967 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd","Type":"ContainerDied","Data":"cc957eb9366269a201d7b61425da3d33afe43e995495a92ee322393a3f38782f"} Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.677450 4832 generic.go:334] "Generic (PLEG): container finished" podID="7b32181c-0268-4e3e-8b7b-f2811720ce58" 
containerID="7d90420d7c4978b59e4c942fbeed59f9ab03f090713025b604574ee43351e007" exitCode=137 Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.677583 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-745cdbf99b-kdz5c" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.677608 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-745cdbf99b-kdz5c" event={"ID":"7b32181c-0268-4e3e-8b7b-f2811720ce58","Type":"ContainerDied","Data":"7d90420d7c4978b59e4c942fbeed59f9ab03f090713025b604574ee43351e007"} Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.677643 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-745cdbf99b-kdz5c" event={"ID":"7b32181c-0268-4e3e-8b7b-f2811720ce58","Type":"ContainerDied","Data":"c177d8f12dd5cfecd3b8f7a7a009cb40de2aa436f2b0d0856655efe6c17bace5"} Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.677663 4832 scope.go:117] "RemoveContainer" containerID="ff9a699aa277b6a5323dd5d39dc4caf402a7524bc23fdae63427ae7fa7ee13cf" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.680353 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0aa341ea-d6ca-4afd-b425-5197402c2ff8","Type":"ContainerStarted","Data":"aa4cee8cf00ba58cc81f6ac2497758300fcf7911575c3729dee1bfb16c8390d4"} Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.688368 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"80295e6a-6a0d-4cb0-868d-684a2631b1eb","Type":"ContainerStarted","Data":"700369e8e65e1357542f56627cd4437148522abad5e3a7711ad36ecfd4648f43"} Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.732336 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.732303193 podStartE2EDuration="3.732303193s" podCreationTimestamp="2026-03-12 15:08:11 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:08:14.702682541 +0000 UTC m=+1253.346696767" watchObservedRunningTime="2026-03-12 15:08:14.732303193 +0000 UTC m=+1253.376317489" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.750034 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.750016302 podStartE2EDuration="3.750016302s" podCreationTimestamp="2026-03-12 15:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:08:14.721587255 +0000 UTC m=+1253.365601481" watchObservedRunningTime="2026-03-12 15:08:14.750016302 +0000 UTC m=+1253.394030528" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.761859 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-745cdbf99b-kdz5c"] Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.768916 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-745cdbf99b-kdz5c"] Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.922836 4832 scope.go:117] "RemoveContainer" containerID="7d90420d7c4978b59e4c942fbeed59f9ab03f090713025b604574ee43351e007" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.949551 4832 scope.go:117] "RemoveContainer" containerID="ff9a699aa277b6a5323dd5d39dc4caf402a7524bc23fdae63427ae7fa7ee13cf" Mar 12 15:08:14 crc kubenswrapper[4832]: E0312 15:08:14.950128 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff9a699aa277b6a5323dd5d39dc4caf402a7524bc23fdae63427ae7fa7ee13cf\": container with ID starting with ff9a699aa277b6a5323dd5d39dc4caf402a7524bc23fdae63427ae7fa7ee13cf not found: ID does not exist" containerID="ff9a699aa277b6a5323dd5d39dc4caf402a7524bc23fdae63427ae7fa7ee13cf" Mar 
12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.950179 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff9a699aa277b6a5323dd5d39dc4caf402a7524bc23fdae63427ae7fa7ee13cf"} err="failed to get container status \"ff9a699aa277b6a5323dd5d39dc4caf402a7524bc23fdae63427ae7fa7ee13cf\": rpc error: code = NotFound desc = could not find container \"ff9a699aa277b6a5323dd5d39dc4caf402a7524bc23fdae63427ae7fa7ee13cf\": container with ID starting with ff9a699aa277b6a5323dd5d39dc4caf402a7524bc23fdae63427ae7fa7ee13cf not found: ID does not exist" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.950235 4832 scope.go:117] "RemoveContainer" containerID="7d90420d7c4978b59e4c942fbeed59f9ab03f090713025b604574ee43351e007" Mar 12 15:08:14 crc kubenswrapper[4832]: E0312 15:08:14.950897 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d90420d7c4978b59e4c942fbeed59f9ab03f090713025b604574ee43351e007\": container with ID starting with 7d90420d7c4978b59e4c942fbeed59f9ab03f090713025b604574ee43351e007 not found: ID does not exist" containerID="7d90420d7c4978b59e4c942fbeed59f9ab03f090713025b604574ee43351e007" Mar 12 15:08:14 crc kubenswrapper[4832]: I0312 15:08:14.950944 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d90420d7c4978b59e4c942fbeed59f9ab03f090713025b604574ee43351e007"} err="failed to get container status \"7d90420d7c4978b59e4c942fbeed59f9ab03f090713025b604574ee43351e007\": rpc error: code = NotFound desc = could not find container \"7d90420d7c4978b59e4c942fbeed59f9ab03f090713025b604574ee43351e007\": container with ID starting with 7d90420d7c4978b59e4c942fbeed59f9ab03f090713025b604574ee43351e007 not found: ID does not exist" Mar 12 15:08:15 crc kubenswrapper[4832]: I0312 15:08:15.398669 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:15 crc kubenswrapper[4832]: I0312 15:08:15.404238 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5b9648779f-rg6wj" Mar 12 15:08:15 crc kubenswrapper[4832]: I0312 15:08:15.718490 4832 generic.go:334] "Generic (PLEG): container finished" podID="97e7d696-8bfc-49ec-ae8c-6061b9813d16" containerID="81a8742ae704c0476e70eb876bdb98687dad42e72b8c6dcb42fdaf3a7dab5eb3" exitCode=0 Mar 12 15:08:15 crc kubenswrapper[4832]: I0312 15:08:15.718633 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8ccdd85bd-b4bf5" event={"ID":"97e7d696-8bfc-49ec-ae8c-6061b9813d16","Type":"ContainerDied","Data":"81a8742ae704c0476e70eb876bdb98687dad42e72b8c6dcb42fdaf3a7dab5eb3"} Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.131113 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.260315 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-config\") pod \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\" (UID: \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.260404 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-ovndb-tls-certs\") pod \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\" (UID: \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.260447 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-httpd-config\") pod \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\" (UID: 
\"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.260571 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-combined-ca-bundle\") pod \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\" (UID: \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.260631 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxp8w\" (UniqueName: \"kubernetes.io/projected/97e7d696-8bfc-49ec-ae8c-6061b9813d16-kube-api-access-wxp8w\") pod \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\" (UID: \"97e7d696-8bfc-49ec-ae8c-6061b9813d16\") " Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.267496 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "97e7d696-8bfc-49ec-ae8c-6061b9813d16" (UID: "97e7d696-8bfc-49ec-ae8c-6061b9813d16"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.267898 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97e7d696-8bfc-49ec-ae8c-6061b9813d16-kube-api-access-wxp8w" (OuterVolumeSpecName: "kube-api-access-wxp8w") pod "97e7d696-8bfc-49ec-ae8c-6061b9813d16" (UID: "97e7d696-8bfc-49ec-ae8c-6061b9813d16"). InnerVolumeSpecName "kube-api-access-wxp8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.310761 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97e7d696-8bfc-49ec-ae8c-6061b9813d16" (UID: "97e7d696-8bfc-49ec-ae8c-6061b9813d16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.326636 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-config" (OuterVolumeSpecName: "config") pod "97e7d696-8bfc-49ec-ae8c-6061b9813d16" (UID: "97e7d696-8bfc-49ec-ae8c-6061b9813d16"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.347613 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "97e7d696-8bfc-49ec-ae8c-6061b9813d16" (UID: "97e7d696-8bfc-49ec-ae8c-6061b9813d16"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.362740 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.362768 4832 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.362779 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.362787 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e7d696-8bfc-49ec-ae8c-6061b9813d16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.362796 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxp8w\" (UniqueName: \"kubernetes.io/projected/97e7d696-8bfc-49ec-ae8c-6061b9813d16-kube-api-access-wxp8w\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.633892 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b32181c-0268-4e3e-8b7b-f2811720ce58" path="/var/lib/kubelet/pods/7b32181c-0268-4e3e-8b7b-f2811720ce58/volumes" Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.728682 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8ccdd85bd-b4bf5" event={"ID":"97e7d696-8bfc-49ec-ae8c-6061b9813d16","Type":"ContainerDied","Data":"ccb417a2b2ea575668c3b6157b67b194b2f6e11776a6cc0974aa342d5eb24461"} Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 
15:08:16.728758 4832 scope.go:117] "RemoveContainer" containerID="7c65c318197a5e7c198b9634dd571c80a048fca8dcedb43f84b073eee536eef5" Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.729237 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8ccdd85bd-b4bf5" Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.760052 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8ccdd85bd-b4bf5"] Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.769292 4832 scope.go:117] "RemoveContainer" containerID="81a8742ae704c0476e70eb876bdb98687dad42e72b8c6dcb42fdaf3a7dab5eb3" Mar 12 15:08:16 crc kubenswrapper[4832]: I0312 15:08:16.775152 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8ccdd85bd-b4bf5"] Mar 12 15:08:18 crc kubenswrapper[4832]: I0312 15:08:18.645053 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97e7d696-8bfc-49ec-ae8c-6061b9813d16" path="/var/lib/kubelet/pods/97e7d696-8bfc-49ec-ae8c-6061b9813d16/volumes" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.652733 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.725859 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-config-data\") pod \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.725955 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-combined-ca-bundle\") pod \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.726010 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-log-httpd\") pod \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.726072 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-sg-core-conf-yaml\") pod \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.726105 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp2d6\" (UniqueName: \"kubernetes.io/projected/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-kube-api-access-sp2d6\") pod \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.726196 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-scripts\") pod \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.726279 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-run-httpd\") pod \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\" (UID: \"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd\") " Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.727220 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" (UID: "129f8e7f-72ac-4e3d-87f0-0c0c43061fbd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.727833 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" (UID: "129f8e7f-72ac-4e3d-87f0-0c0c43061fbd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.733461 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-kube-api-access-sp2d6" (OuterVolumeSpecName: "kube-api-access-sp2d6") pod "129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" (UID: "129f8e7f-72ac-4e3d-87f0-0c0c43061fbd"). InnerVolumeSpecName "kube-api-access-sp2d6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.733790 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-scripts" (OuterVolumeSpecName: "scripts") pod "129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" (UID: "129f8e7f-72ac-4e3d-87f0-0c0c43061fbd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.757901 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" (UID: "129f8e7f-72ac-4e3d-87f0-0c0c43061fbd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.772628 4832 generic.go:334] "Generic (PLEG): container finished" podID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerID="ad535f7cd9252c80e6e2ae8eaa3805b0b73dabead16def3b6433a644ae6a495b" exitCode=0 Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.772671 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd","Type":"ContainerDied","Data":"ad535f7cd9252c80e6e2ae8eaa3805b0b73dabead16def3b6433a644ae6a495b"} Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.772700 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"129f8e7f-72ac-4e3d-87f0-0c0c43061fbd","Type":"ContainerDied","Data":"8c5c1255aeb831f5cdfc7d280d87103ebb614b6fa7adbacabaffe18e1e745c50"} Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.772719 4832 scope.go:117] "RemoveContainer" containerID="613610414f4956282f35997e9019cc2983294a30912359f4a46d8f374b2b258c" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 
15:08:19.772835 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.828974 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.829024 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp2d6\" (UniqueName: \"kubernetes.io/projected/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-kube-api-access-sp2d6\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.829043 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.829055 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.829066 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.837524 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" (UID: "129f8e7f-72ac-4e3d-87f0-0c0c43061fbd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.844287 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-config-data" (OuterVolumeSpecName: "config-data") pod "129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" (UID: "129f8e7f-72ac-4e3d-87f0-0c0c43061fbd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.881782 4832 scope.go:117] "RemoveContainer" containerID="0d5ff35b04d8a95e78f7b062e6dee2fc871babf35a3500cdbcbbe61b15027624" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.899729 4832 scope.go:117] "RemoveContainer" containerID="cc957eb9366269a201d7b61425da3d33afe43e995495a92ee322393a3f38782f" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.917788 4832 scope.go:117] "RemoveContainer" containerID="ad535f7cd9252c80e6e2ae8eaa3805b0b73dabead16def3b6433a644ae6a495b" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.930566 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.930591 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.933341 4832 scope.go:117] "RemoveContainer" containerID="613610414f4956282f35997e9019cc2983294a30912359f4a46d8f374b2b258c" Mar 12 15:08:19 crc kubenswrapper[4832]: E0312 15:08:19.933751 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"613610414f4956282f35997e9019cc2983294a30912359f4a46d8f374b2b258c\": container 
with ID starting with 613610414f4956282f35997e9019cc2983294a30912359f4a46d8f374b2b258c not found: ID does not exist" containerID="613610414f4956282f35997e9019cc2983294a30912359f4a46d8f374b2b258c" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.933779 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"613610414f4956282f35997e9019cc2983294a30912359f4a46d8f374b2b258c"} err="failed to get container status \"613610414f4956282f35997e9019cc2983294a30912359f4a46d8f374b2b258c\": rpc error: code = NotFound desc = could not find container \"613610414f4956282f35997e9019cc2983294a30912359f4a46d8f374b2b258c\": container with ID starting with 613610414f4956282f35997e9019cc2983294a30912359f4a46d8f374b2b258c not found: ID does not exist" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.933800 4832 scope.go:117] "RemoveContainer" containerID="0d5ff35b04d8a95e78f7b062e6dee2fc871babf35a3500cdbcbbe61b15027624" Mar 12 15:08:19 crc kubenswrapper[4832]: E0312 15:08:19.934097 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d5ff35b04d8a95e78f7b062e6dee2fc871babf35a3500cdbcbbe61b15027624\": container with ID starting with 0d5ff35b04d8a95e78f7b062e6dee2fc871babf35a3500cdbcbbe61b15027624 not found: ID does not exist" containerID="0d5ff35b04d8a95e78f7b062e6dee2fc871babf35a3500cdbcbbe61b15027624" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.934136 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d5ff35b04d8a95e78f7b062e6dee2fc871babf35a3500cdbcbbe61b15027624"} err="failed to get container status \"0d5ff35b04d8a95e78f7b062e6dee2fc871babf35a3500cdbcbbe61b15027624\": rpc error: code = NotFound desc = could not find container \"0d5ff35b04d8a95e78f7b062e6dee2fc871babf35a3500cdbcbbe61b15027624\": container with ID starting with 0d5ff35b04d8a95e78f7b062e6dee2fc871babf35a3500cdbcbbe61b15027624 not 
found: ID does not exist" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.934162 4832 scope.go:117] "RemoveContainer" containerID="cc957eb9366269a201d7b61425da3d33afe43e995495a92ee322393a3f38782f" Mar 12 15:08:19 crc kubenswrapper[4832]: E0312 15:08:19.934394 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc957eb9366269a201d7b61425da3d33afe43e995495a92ee322393a3f38782f\": container with ID starting with cc957eb9366269a201d7b61425da3d33afe43e995495a92ee322393a3f38782f not found: ID does not exist" containerID="cc957eb9366269a201d7b61425da3d33afe43e995495a92ee322393a3f38782f" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.934420 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc957eb9366269a201d7b61425da3d33afe43e995495a92ee322393a3f38782f"} err="failed to get container status \"cc957eb9366269a201d7b61425da3d33afe43e995495a92ee322393a3f38782f\": rpc error: code = NotFound desc = could not find container \"cc957eb9366269a201d7b61425da3d33afe43e995495a92ee322393a3f38782f\": container with ID starting with cc957eb9366269a201d7b61425da3d33afe43e995495a92ee322393a3f38782f not found: ID does not exist" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.934433 4832 scope.go:117] "RemoveContainer" containerID="ad535f7cd9252c80e6e2ae8eaa3805b0b73dabead16def3b6433a644ae6a495b" Mar 12 15:08:19 crc kubenswrapper[4832]: E0312 15:08:19.934720 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad535f7cd9252c80e6e2ae8eaa3805b0b73dabead16def3b6433a644ae6a495b\": container with ID starting with ad535f7cd9252c80e6e2ae8eaa3805b0b73dabead16def3b6433a644ae6a495b not found: ID does not exist" containerID="ad535f7cd9252c80e6e2ae8eaa3805b0b73dabead16def3b6433a644ae6a495b" Mar 12 15:08:19 crc kubenswrapper[4832]: I0312 15:08:19.934746 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad535f7cd9252c80e6e2ae8eaa3805b0b73dabead16def3b6433a644ae6a495b"} err="failed to get container status \"ad535f7cd9252c80e6e2ae8eaa3805b0b73dabead16def3b6433a644ae6a495b\": rpc error: code = NotFound desc = could not find container \"ad535f7cd9252c80e6e2ae8eaa3805b0b73dabead16def3b6433a644ae6a495b\": container with ID starting with ad535f7cd9252c80e6e2ae8eaa3805b0b73dabead16def3b6433a644ae6a495b not found: ID does not exist" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.114680 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.133670 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.153127 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:08:20 crc kubenswrapper[4832]: E0312 15:08:20.153640 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e7d696-8bfc-49ec-ae8c-6061b9813d16" containerName="neutron-api" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.153660 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="97e7d696-8bfc-49ec-ae8c-6061b9813d16" containerName="neutron-api" Mar 12 15:08:20 crc kubenswrapper[4832]: E0312 15:08:20.153673 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerName="ceilometer-central-agent" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.153679 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerName="ceilometer-central-agent" Mar 12 15:08:20 crc kubenswrapper[4832]: E0312 15:08:20.153695 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e7d696-8bfc-49ec-ae8c-6061b9813d16" containerName="neutron-httpd" Mar 12 15:08:20 crc 
kubenswrapper[4832]: I0312 15:08:20.153701 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="97e7d696-8bfc-49ec-ae8c-6061b9813d16" containerName="neutron-httpd" Mar 12 15:08:20 crc kubenswrapper[4832]: E0312 15:08:20.153715 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerName="sg-core" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.153721 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerName="sg-core" Mar 12 15:08:20 crc kubenswrapper[4832]: E0312 15:08:20.153736 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b32181c-0268-4e3e-8b7b-f2811720ce58" containerName="horizon-log" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.153742 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b32181c-0268-4e3e-8b7b-f2811720ce58" containerName="horizon-log" Mar 12 15:08:20 crc kubenswrapper[4832]: E0312 15:08:20.153762 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b32181c-0268-4e3e-8b7b-f2811720ce58" containerName="horizon" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.153769 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b32181c-0268-4e3e-8b7b-f2811720ce58" containerName="horizon" Mar 12 15:08:20 crc kubenswrapper[4832]: E0312 15:08:20.153783 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerName="proxy-httpd" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.153791 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerName="proxy-httpd" Mar 12 15:08:20 crc kubenswrapper[4832]: E0312 15:08:20.153809 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerName="ceilometer-notification-agent" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.153817 4832 
state_mem.go:107] "Deleted CPUSet assignment" podUID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerName="ceilometer-notification-agent" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.154039 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerName="sg-core" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.154058 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerName="proxy-httpd" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.154074 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="97e7d696-8bfc-49ec-ae8c-6061b9813d16" containerName="neutron-api" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.154092 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b32181c-0268-4e3e-8b7b-f2811720ce58" containerName="horizon-log" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.154111 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="97e7d696-8bfc-49ec-ae8c-6061b9813d16" containerName="neutron-httpd" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.154131 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerName="ceilometer-notification-agent" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.154145 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b32181c-0268-4e3e-8b7b-f2811720ce58" containerName="horizon" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.154159 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" containerName="ceilometer-central-agent" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.156100 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.157930 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.158691 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.159954 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.162303 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.235970 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.236036 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-config-data\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.236068 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.236095 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a7e7258-0293-4dd3-8733-f9436098b523-log-httpd\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.236240 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.236310 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dksct\" (UniqueName: \"kubernetes.io/projected/0a7e7258-0293-4dd3-8733-f9436098b523-kube-api-access-dksct\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.236485 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-scripts\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.236679 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a7e7258-0293-4dd3-8733-f9436098b523-run-httpd\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.337804 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a7e7258-0293-4dd3-8733-f9436098b523-log-httpd\") pod \"ceilometer-0\" (UID: 
\"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.337853 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.337880 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dksct\" (UniqueName: \"kubernetes.io/projected/0a7e7258-0293-4dd3-8733-f9436098b523-kube-api-access-dksct\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.337920 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-scripts\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.337941 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a7e7258-0293-4dd3-8733-f9436098b523-run-httpd\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.338012 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.338052 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-config-data\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.338077 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.338660 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a7e7258-0293-4dd3-8733-f9436098b523-run-httpd\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.338873 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a7e7258-0293-4dd3-8733-f9436098b523-log-httpd\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.342719 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.343119 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc 
kubenswrapper[4832]: I0312 15:08:20.343553 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.343614 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-config-data\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.348788 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-scripts\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.361518 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dksct\" (UniqueName: \"kubernetes.io/projected/0a7e7258-0293-4dd3-8733-f9436098b523-kube-api-access-dksct\") pod \"ceilometer-0\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.476621 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.635752 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="129f8e7f-72ac-4e3d-87f0-0c0c43061fbd" path="/var/lib/kubelet/pods/129f8e7f-72ac-4e3d-87f0-0c0c43061fbd/volumes" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.727288 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.759086 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:08:20 crc kubenswrapper[4832]: W0312 15:08:20.765354 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a7e7258_0293_4dd3_8733_f9436098b523.slice/crio-c9a9fbd8ca427429eac2d05578959916ebbedef257992e60edacbbc02fee0647 WatchSource:0}: Error finding container c9a9fbd8ca427429eac2d05578959916ebbedef257992e60edacbbc02fee0647: Status 404 returned error can't find the container with id c9a9fbd8ca427429eac2d05578959916ebbedef257992e60edacbbc02fee0647 Mar 12 15:08:20 crc kubenswrapper[4832]: I0312 15:08:20.799096 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a7e7258-0293-4dd3-8733-f9436098b523","Type":"ContainerStarted","Data":"c9a9fbd8ca427429eac2d05578959916ebbedef257992e60edacbbc02fee0647"} Mar 12 15:08:21 crc kubenswrapper[4832]: I0312 15:08:21.811357 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a7e7258-0293-4dd3-8733-f9436098b523","Type":"ContainerStarted","Data":"9539e8e2458bd9a7c5c58a91341071efc686e5d5a046a3ec0b007dec25892be1"} Mar 12 15:08:22 crc kubenswrapper[4832]: I0312 15:08:22.238019 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 12 15:08:22 crc kubenswrapper[4832]: I0312 
15:08:22.238072 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 12 15:08:22 crc kubenswrapper[4832]: I0312 15:08:22.273164 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 12 15:08:22 crc kubenswrapper[4832]: I0312 15:08:22.274129 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 12 15:08:22 crc kubenswrapper[4832]: I0312 15:08:22.274249 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 12 15:08:22 crc kubenswrapper[4832]: I0312 15:08:22.286995 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 12 15:08:22 crc kubenswrapper[4832]: I0312 15:08:22.310346 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 12 15:08:22 crc kubenswrapper[4832]: I0312 15:08:22.323402 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 12 15:08:22 crc kubenswrapper[4832]: I0312 15:08:22.820576 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a7e7258-0293-4dd3-8733-f9436098b523","Type":"ContainerStarted","Data":"a502c6c364ba24446e41112936589e89dfc6c88e18f17f2493af32a3f0a869d1"} Mar 12 15:08:22 crc kubenswrapper[4832]: I0312 15:08:22.821200 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 12 15:08:22 crc kubenswrapper[4832]: I0312 15:08:22.821252 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 12 15:08:22 crc kubenswrapper[4832]: I0312 15:08:22.821285 4832 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 12 15:08:22 crc kubenswrapper[4832]: I0312 15:08:22.821297 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 12 15:08:23 crc kubenswrapper[4832]: I0312 15:08:23.830089 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a7e7258-0293-4dd3-8733-f9436098b523","Type":"ContainerStarted","Data":"1f676224f1ce956a5766c24b9ea42b04e47fee466501be816798b7194deb580d"} Mar 12 15:08:23 crc kubenswrapper[4832]: I0312 15:08:23.973597 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-hkl8h"] Mar 12 15:08:23 crc kubenswrapper[4832]: I0312 15:08:23.975611 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hkl8h" Mar 12 15:08:23 crc kubenswrapper[4832]: I0312 15:08:23.989880 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hkl8h"] Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.033583 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-tpjpw"] Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.035082 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-tpjpw" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.044216 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-tpjpw"] Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.107217 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b1af4f2-c6be-4256-9a21-5df02b6e04c7-operator-scripts\") pod \"nova-api-db-create-hkl8h\" (UID: \"9b1af4f2-c6be-4256-9a21-5df02b6e04c7\") " pod="openstack/nova-api-db-create-hkl8h" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.107321 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4h2z\" (UniqueName: \"kubernetes.io/projected/4d35b44d-107c-41c3-bbb4-02d9059167e5-kube-api-access-q4h2z\") pod \"nova-cell0-db-create-tpjpw\" (UID: \"4d35b44d-107c-41c3-bbb4-02d9059167e5\") " pod="openstack/nova-cell0-db-create-tpjpw" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.107384 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4bfl\" (UniqueName: \"kubernetes.io/projected/9b1af4f2-c6be-4256-9a21-5df02b6e04c7-kube-api-access-b4bfl\") pod \"nova-api-db-create-hkl8h\" (UID: \"9b1af4f2-c6be-4256-9a21-5df02b6e04c7\") " pod="openstack/nova-api-db-create-hkl8h" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.107412 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d35b44d-107c-41c3-bbb4-02d9059167e5-operator-scripts\") pod \"nova-cell0-db-create-tpjpw\" (UID: \"4d35b44d-107c-41c3-bbb4-02d9059167e5\") " pod="openstack/nova-cell0-db-create-tpjpw" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.128279 4832 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-api-10f7-account-create-update-mvmhr"] Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.129590 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-10f7-account-create-update-mvmhr" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.146765 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.148003 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-10f7-account-create-update-mvmhr"] Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.208724 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4h2z\" (UniqueName: \"kubernetes.io/projected/4d35b44d-107c-41c3-bbb4-02d9059167e5-kube-api-access-q4h2z\") pod \"nova-cell0-db-create-tpjpw\" (UID: \"4d35b44d-107c-41c3-bbb4-02d9059167e5\") " pod="openstack/nova-cell0-db-create-tpjpw" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.208800 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4bfl\" (UniqueName: \"kubernetes.io/projected/9b1af4f2-c6be-4256-9a21-5df02b6e04c7-kube-api-access-b4bfl\") pod \"nova-api-db-create-hkl8h\" (UID: \"9b1af4f2-c6be-4256-9a21-5df02b6e04c7\") " pod="openstack/nova-api-db-create-hkl8h" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.208844 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d35b44d-107c-41c3-bbb4-02d9059167e5-operator-scripts\") pod \"nova-cell0-db-create-tpjpw\" (UID: \"4d35b44d-107c-41c3-bbb4-02d9059167e5\") " pod="openstack/nova-cell0-db-create-tpjpw" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.208918 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8wkx\" (UniqueName: 
\"kubernetes.io/projected/15a6d52a-8c4b-4d33-a56f-3173bf227728-kube-api-access-x8wkx\") pod \"nova-api-10f7-account-create-update-mvmhr\" (UID: \"15a6d52a-8c4b-4d33-a56f-3173bf227728\") " pod="openstack/nova-api-10f7-account-create-update-mvmhr" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.208953 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b1af4f2-c6be-4256-9a21-5df02b6e04c7-operator-scripts\") pod \"nova-api-db-create-hkl8h\" (UID: \"9b1af4f2-c6be-4256-9a21-5df02b6e04c7\") " pod="openstack/nova-api-db-create-hkl8h" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.208988 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15a6d52a-8c4b-4d33-a56f-3173bf227728-operator-scripts\") pod \"nova-api-10f7-account-create-update-mvmhr\" (UID: \"15a6d52a-8c4b-4d33-a56f-3173bf227728\") " pod="openstack/nova-api-10f7-account-create-update-mvmhr" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.209860 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b1af4f2-c6be-4256-9a21-5df02b6e04c7-operator-scripts\") pod \"nova-api-db-create-hkl8h\" (UID: \"9b1af4f2-c6be-4256-9a21-5df02b6e04c7\") " pod="openstack/nova-api-db-create-hkl8h" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.209895 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d35b44d-107c-41c3-bbb4-02d9059167e5-operator-scripts\") pod \"nova-cell0-db-create-tpjpw\" (UID: \"4d35b44d-107c-41c3-bbb4-02d9059167e5\") " pod="openstack/nova-cell0-db-create-tpjpw" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.222037 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mddvp"] Mar 12 
15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.225145 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mddvp" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.256952 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4h2z\" (UniqueName: \"kubernetes.io/projected/4d35b44d-107c-41c3-bbb4-02d9059167e5-kube-api-access-q4h2z\") pod \"nova-cell0-db-create-tpjpw\" (UID: \"4d35b44d-107c-41c3-bbb4-02d9059167e5\") " pod="openstack/nova-cell0-db-create-tpjpw" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.261773 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mddvp"] Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.263333 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4bfl\" (UniqueName: \"kubernetes.io/projected/9b1af4f2-c6be-4256-9a21-5df02b6e04c7-kube-api-access-b4bfl\") pod \"nova-api-db-create-hkl8h\" (UID: \"9b1af4f2-c6be-4256-9a21-5df02b6e04c7\") " pod="openstack/nova-api-db-create-hkl8h" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.310962 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hkl8h" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.312249 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8wkx\" (UniqueName: \"kubernetes.io/projected/15a6d52a-8c4b-4d33-a56f-3173bf227728-kube-api-access-x8wkx\") pod \"nova-api-10f7-account-create-update-mvmhr\" (UID: \"15a6d52a-8c4b-4d33-a56f-3173bf227728\") " pod="openstack/nova-api-10f7-account-create-update-mvmhr" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.312310 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15a6d52a-8c4b-4d33-a56f-3173bf227728-operator-scripts\") pod \"nova-api-10f7-account-create-update-mvmhr\" (UID: \"15a6d52a-8c4b-4d33-a56f-3173bf227728\") " pod="openstack/nova-api-10f7-account-create-update-mvmhr" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.312976 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15a6d52a-8c4b-4d33-a56f-3173bf227728-operator-scripts\") pod \"nova-api-10f7-account-create-update-mvmhr\" (UID: \"15a6d52a-8c4b-4d33-a56f-3173bf227728\") " pod="openstack/nova-api-10f7-account-create-update-mvmhr" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.328546 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-3211-account-create-update-g55fm"] Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.329718 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3211-account-create-update-g55fm" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.333649 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.364899 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-tpjpw" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.375139 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8wkx\" (UniqueName: \"kubernetes.io/projected/15a6d52a-8c4b-4d33-a56f-3173bf227728-kube-api-access-x8wkx\") pod \"nova-api-10f7-account-create-update-mvmhr\" (UID: \"15a6d52a-8c4b-4d33-a56f-3173bf227728\") " pod="openstack/nova-api-10f7-account-create-update-mvmhr" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.375335 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3211-account-create-update-g55fm"] Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.413439 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djwnh\" (UniqueName: \"kubernetes.io/projected/6ea781ce-2058-4f49-b8c5-b0886379887a-kube-api-access-djwnh\") pod \"nova-cell1-db-create-mddvp\" (UID: \"6ea781ce-2058-4f49-b8c5-b0886379887a\") " pod="openstack/nova-cell1-db-create-mddvp" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.413963 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ea781ce-2058-4f49-b8c5-b0886379887a-operator-scripts\") pod \"nova-cell1-db-create-mddvp\" (UID: \"6ea781ce-2058-4f49-b8c5-b0886379887a\") " pod="openstack/nova-cell1-db-create-mddvp" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.458017 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-10f7-account-create-update-mvmhr" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.533150 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ea781ce-2058-4f49-b8c5-b0886379887a-operator-scripts\") pod \"nova-cell1-db-create-mddvp\" (UID: \"6ea781ce-2058-4f49-b8c5-b0886379887a\") " pod="openstack/nova-cell1-db-create-mddvp" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.533217 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djwnh\" (UniqueName: \"kubernetes.io/projected/6ea781ce-2058-4f49-b8c5-b0886379887a-kube-api-access-djwnh\") pod \"nova-cell1-db-create-mddvp\" (UID: \"6ea781ce-2058-4f49-b8c5-b0886379887a\") " pod="openstack/nova-cell1-db-create-mddvp" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.533310 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx6df\" (UniqueName: \"kubernetes.io/projected/1922dec6-71fa-4ea1-94da-69bb83431a82-kube-api-access-cx6df\") pod \"nova-cell0-3211-account-create-update-g55fm\" (UID: \"1922dec6-71fa-4ea1-94da-69bb83431a82\") " pod="openstack/nova-cell0-3211-account-create-update-g55fm" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.533329 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1922dec6-71fa-4ea1-94da-69bb83431a82-operator-scripts\") pod \"nova-cell0-3211-account-create-update-g55fm\" (UID: \"1922dec6-71fa-4ea1-94da-69bb83431a82\") " pod="openstack/nova-cell0-3211-account-create-update-g55fm" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.534121 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6ea781ce-2058-4f49-b8c5-b0886379887a-operator-scripts\") pod \"nova-cell1-db-create-mddvp\" (UID: \"6ea781ce-2058-4f49-b8c5-b0886379887a\") " pod="openstack/nova-cell1-db-create-mddvp" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.591197 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djwnh\" (UniqueName: \"kubernetes.io/projected/6ea781ce-2058-4f49-b8c5-b0886379887a-kube-api-access-djwnh\") pod \"nova-cell1-db-create-mddvp\" (UID: \"6ea781ce-2058-4f49-b8c5-b0886379887a\") " pod="openstack/nova-cell1-db-create-mddvp" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.619935 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-13fb-account-create-update-qgfdt"] Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.622749 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-13fb-account-create-update-qgfdt" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.626804 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.638892 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx6df\" (UniqueName: \"kubernetes.io/projected/1922dec6-71fa-4ea1-94da-69bb83431a82-kube-api-access-cx6df\") pod \"nova-cell0-3211-account-create-update-g55fm\" (UID: \"1922dec6-71fa-4ea1-94da-69bb83431a82\") " pod="openstack/nova-cell0-3211-account-create-update-g55fm" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.638922 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1922dec6-71fa-4ea1-94da-69bb83431a82-operator-scripts\") pod \"nova-cell0-3211-account-create-update-g55fm\" (UID: \"1922dec6-71fa-4ea1-94da-69bb83431a82\") " pod="openstack/nova-cell0-3211-account-create-update-g55fm" 
Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.639555 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1922dec6-71fa-4ea1-94da-69bb83431a82-operator-scripts\") pod \"nova-cell0-3211-account-create-update-g55fm\" (UID: \"1922dec6-71fa-4ea1-94da-69bb83431a82\") " pod="openstack/nova-cell0-3211-account-create-update-g55fm" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.667210 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx6df\" (UniqueName: \"kubernetes.io/projected/1922dec6-71fa-4ea1-94da-69bb83431a82-kube-api-access-cx6df\") pod \"nova-cell0-3211-account-create-update-g55fm\" (UID: \"1922dec6-71fa-4ea1-94da-69bb83431a82\") " pod="openstack/nova-cell0-3211-account-create-update-g55fm" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.695993 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-13fb-account-create-update-qgfdt"] Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.711495 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mddvp" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.740540 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-3211-account-create-update-g55fm" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.740948 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9n8t\" (UniqueName: \"kubernetes.io/projected/cb14d699-d295-4ed3-ab30-a16b79ec7d94-kube-api-access-w9n8t\") pod \"nova-cell1-13fb-account-create-update-qgfdt\" (UID: \"cb14d699-d295-4ed3-ab30-a16b79ec7d94\") " pod="openstack/nova-cell1-13fb-account-create-update-qgfdt" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.741051 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb14d699-d295-4ed3-ab30-a16b79ec7d94-operator-scripts\") pod \"nova-cell1-13fb-account-create-update-qgfdt\" (UID: \"cb14d699-d295-4ed3-ab30-a16b79ec7d94\") " pod="openstack/nova-cell1-13fb-account-create-update-qgfdt" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.844739 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb14d699-d295-4ed3-ab30-a16b79ec7d94-operator-scripts\") pod \"nova-cell1-13fb-account-create-update-qgfdt\" (UID: \"cb14d699-d295-4ed3-ab30-a16b79ec7d94\") " pod="openstack/nova-cell1-13fb-account-create-update-qgfdt" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.845170 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9n8t\" (UniqueName: \"kubernetes.io/projected/cb14d699-d295-4ed3-ab30-a16b79ec7d94-kube-api-access-w9n8t\") pod \"nova-cell1-13fb-account-create-update-qgfdt\" (UID: \"cb14d699-d295-4ed3-ab30-a16b79ec7d94\") " pod="openstack/nova-cell1-13fb-account-create-update-qgfdt" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.846863 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/cb14d699-d295-4ed3-ab30-a16b79ec7d94-operator-scripts\") pod \"nova-cell1-13fb-account-create-update-qgfdt\" (UID: \"cb14d699-d295-4ed3-ab30-a16b79ec7d94\") " pod="openstack/nova-cell1-13fb-account-create-update-qgfdt" Mar 12 15:08:24 crc kubenswrapper[4832]: I0312 15:08:24.868092 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9n8t\" (UniqueName: \"kubernetes.io/projected/cb14d699-d295-4ed3-ab30-a16b79ec7d94-kube-api-access-w9n8t\") pod \"nova-cell1-13fb-account-create-update-qgfdt\" (UID: \"cb14d699-d295-4ed3-ab30-a16b79ec7d94\") " pod="openstack/nova-cell1-13fb-account-create-update-qgfdt" Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.007688 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hkl8h"] Mar 12 15:08:25 crc kubenswrapper[4832]: W0312 15:08:25.025633 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b1af4f2_c6be_4256_9a21_5df02b6e04c7.slice/crio-e7b181eaa226fb6400bdbabed681e96fc52d210a4590973a35394cc0a7de2f22 WatchSource:0}: Error finding container e7b181eaa226fb6400bdbabed681e96fc52d210a4590973a35394cc0a7de2f22: Status 404 returned error can't find the container with id e7b181eaa226fb6400bdbabed681e96fc52d210a4590973a35394cc0a7de2f22 Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.050350 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-13fb-account-create-update-qgfdt" Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.149563 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-tpjpw"] Mar 12 15:08:25 crc kubenswrapper[4832]: W0312 15:08:25.171237 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d35b44d_107c_41c3_bbb4_02d9059167e5.slice/crio-fa3eb828cda68693bf5af0f020aca16cfbd09851699559eb51519afcff1e1429 WatchSource:0}: Error finding container fa3eb828cda68693bf5af0f020aca16cfbd09851699559eb51519afcff1e1429: Status 404 returned error can't find the container with id fa3eb828cda68693bf5af0f020aca16cfbd09851699559eb51519afcff1e1429 Mar 12 15:08:25 crc kubenswrapper[4832]: W0312 15:08:25.173741 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15a6d52a_8c4b_4d33_a56f_3173bf227728.slice/crio-1bbc75721b91002a9775067ebda29679c5aa6e3dc99e689b558e8ceb8f137757 WatchSource:0}: Error finding container 1bbc75721b91002a9775067ebda29679c5aa6e3dc99e689b558e8ceb8f137757: Status 404 returned error can't find the container with id 1bbc75721b91002a9775067ebda29679c5aa6e3dc99e689b558e8ceb8f137757 Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.190546 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-10f7-account-create-update-mvmhr"] Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.293847 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.294235 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.355364 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mddvp"] Mar 12 
15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.365957 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3211-account-create-update-g55fm"] Mar 12 15:08:25 crc kubenswrapper[4832]: W0312 15:08:25.382486 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1922dec6_71fa_4ea1_94da_69bb83431a82.slice/crio-febf795347e15002a6bbc30b4e5e1349bfed545ea88f8d131c7f53e3e708afee WatchSource:0}: Error finding container febf795347e15002a6bbc30b4e5e1349bfed545ea88f8d131c7f53e3e708afee: Status 404 returned error can't find the container with id febf795347e15002a6bbc30b4e5e1349bfed545ea88f8d131c7f53e3e708afee Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.516498 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.516627 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.565469 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-13fb-account-create-update-qgfdt"] Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.655887 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.704685 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.764203 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.919645 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3211-account-create-update-g55fm" 
event={"ID":"1922dec6-71fa-4ea1-94da-69bb83431a82","Type":"ContainerStarted","Data":"902d3a6df6a2599a51d88a82269d66a67ebc93d0e997290c7c04db15d19e5905"} Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.919709 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3211-account-create-update-g55fm" event={"ID":"1922dec6-71fa-4ea1-94da-69bb83431a82","Type":"ContainerStarted","Data":"febf795347e15002a6bbc30b4e5e1349bfed545ea88f8d131c7f53e3e708afee"} Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.928151 4832 generic.go:334] "Generic (PLEG): container finished" podID="9b1af4f2-c6be-4256-9a21-5df02b6e04c7" containerID="c920565b45e7ec88ac6065e59f36fca1b7ae24f8614a5544b0dc53dea1af349a" exitCode=0 Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.928209 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hkl8h" event={"ID":"9b1af4f2-c6be-4256-9a21-5df02b6e04c7","Type":"ContainerDied","Data":"c920565b45e7ec88ac6065e59f36fca1b7ae24f8614a5544b0dc53dea1af349a"} Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.928248 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hkl8h" event={"ID":"9b1af4f2-c6be-4256-9a21-5df02b6e04c7","Type":"ContainerStarted","Data":"e7b181eaa226fb6400bdbabed681e96fc52d210a4590973a35394cc0a7de2f22"} Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.929742 4832 generic.go:334] "Generic (PLEG): container finished" podID="4d35b44d-107c-41c3-bbb4-02d9059167e5" containerID="d55a330d4b6f2b8bfa546ce46cb612294af1d148558f7f10e6b46aeb94c7b4f4" exitCode=0 Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.929836 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tpjpw" event={"ID":"4d35b44d-107c-41c3-bbb4-02d9059167e5","Type":"ContainerDied","Data":"d55a330d4b6f2b8bfa546ce46cb612294af1d148558f7f10e6b46aeb94c7b4f4"} Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.929864 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tpjpw" event={"ID":"4d35b44d-107c-41c3-bbb4-02d9059167e5","Type":"ContainerStarted","Data":"fa3eb828cda68693bf5af0f020aca16cfbd09851699559eb51519afcff1e1429"} Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.937332 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-13fb-account-create-update-qgfdt" event={"ID":"cb14d699-d295-4ed3-ab30-a16b79ec7d94","Type":"ContainerStarted","Data":"f50b2911f46357ecb0bcac86da8aab2ee5992091d1d19ba214a42d79edf7ab09"} Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.939941 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-3211-account-create-update-g55fm" podStartSLOduration=1.939931845 podStartE2EDuration="1.939931845s" podCreationTimestamp="2026-03-12 15:08:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:08:25.931996397 +0000 UTC m=+1264.576010623" watchObservedRunningTime="2026-03-12 15:08:25.939931845 +0000 UTC m=+1264.583946071" Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.952757 4832 generic.go:334] "Generic (PLEG): container finished" podID="15a6d52a-8c4b-4d33-a56f-3173bf227728" containerID="0ab8f76bdd7141ee772ca8f1c3b8b3c341cae1b5a641fad21588306f601568e0" exitCode=0 Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.952956 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-10f7-account-create-update-mvmhr" event={"ID":"15a6d52a-8c4b-4d33-a56f-3173bf227728","Type":"ContainerDied","Data":"0ab8f76bdd7141ee772ca8f1c3b8b3c341cae1b5a641fad21588306f601568e0"} Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.953012 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-10f7-account-create-update-mvmhr" 
event={"ID":"15a6d52a-8c4b-4d33-a56f-3173bf227728","Type":"ContainerStarted","Data":"1bbc75721b91002a9775067ebda29679c5aa6e3dc99e689b558e8ceb8f137757"} Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.954795 4832 generic.go:334] "Generic (PLEG): container finished" podID="6ea781ce-2058-4f49-b8c5-b0886379887a" containerID="34bdd30b51e980a430daa5ba4a185519ba7c893c0d23ad30fddb2d795513f930" exitCode=0 Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.954870 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mddvp" event={"ID":"6ea781ce-2058-4f49-b8c5-b0886379887a","Type":"ContainerDied","Data":"34bdd30b51e980a430daa5ba4a185519ba7c893c0d23ad30fddb2d795513f930"} Mar 12 15:08:25 crc kubenswrapper[4832]: I0312 15:08:25.954929 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mddvp" event={"ID":"6ea781ce-2058-4f49-b8c5-b0886379887a","Type":"ContainerStarted","Data":"7a3c466f34a1c4af24f4426db72deb187ea648b4646f42b9772a0bfa50e80ab1"} Mar 12 15:08:26 crc kubenswrapper[4832]: I0312 15:08:26.966105 4832 generic.go:334] "Generic (PLEG): container finished" podID="cb14d699-d295-4ed3-ab30-a16b79ec7d94" containerID="0cc0de4a635656a0ffb465ab66ddc9bd6a66ef1dc6f546e13d578aa8aca59ed8" exitCode=0 Mar 12 15:08:26 crc kubenswrapper[4832]: I0312 15:08:26.966177 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-13fb-account-create-update-qgfdt" event={"ID":"cb14d699-d295-4ed3-ab30-a16b79ec7d94","Type":"ContainerDied","Data":"0cc0de4a635656a0ffb465ab66ddc9bd6a66ef1dc6f546e13d578aa8aca59ed8"} Mar 12 15:08:26 crc kubenswrapper[4832]: I0312 15:08:26.968062 4832 generic.go:334] "Generic (PLEG): container finished" podID="1922dec6-71fa-4ea1-94da-69bb83431a82" containerID="902d3a6df6a2599a51d88a82269d66a67ebc93d0e997290c7c04db15d19e5905" exitCode=0 Mar 12 15:08:26 crc kubenswrapper[4832]: I0312 15:08:26.968135 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-3211-account-create-update-g55fm" event={"ID":"1922dec6-71fa-4ea1-94da-69bb83431a82","Type":"ContainerDied","Data":"902d3a6df6a2599a51d88a82269d66a67ebc93d0e997290c7c04db15d19e5905"} Mar 12 15:08:26 crc kubenswrapper[4832]: I0312 15:08:26.970897 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a7e7258-0293-4dd3-8733-f9436098b523","Type":"ContainerStarted","Data":"b888a43cbdee8e15b4bf7fd57d5fe900b04d561338dc0e144287bdd01125971a"} Mar 12 15:08:26 crc kubenswrapper[4832]: I0312 15:08:26.971199 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a7e7258-0293-4dd3-8733-f9436098b523" containerName="proxy-httpd" containerID="cri-o://b888a43cbdee8e15b4bf7fd57d5fe900b04d561338dc0e144287bdd01125971a" gracePeriod=30 Mar 12 15:08:26 crc kubenswrapper[4832]: I0312 15:08:26.971219 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a7e7258-0293-4dd3-8733-f9436098b523" containerName="ceilometer-notification-agent" containerID="cri-o://a502c6c364ba24446e41112936589e89dfc6c88e18f17f2493af32a3f0a869d1" gracePeriod=30 Mar 12 15:08:26 crc kubenswrapper[4832]: I0312 15:08:26.971231 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a7e7258-0293-4dd3-8733-f9436098b523" containerName="sg-core" containerID="cri-o://1f676224f1ce956a5766c24b9ea42b04e47fee466501be816798b7194deb580d" gracePeriod=30 Mar 12 15:08:26 crc kubenswrapper[4832]: I0312 15:08:26.971315 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a7e7258-0293-4dd3-8733-f9436098b523" containerName="ceilometer-central-agent" containerID="cri-o://9539e8e2458bd9a7c5c58a91341071efc686e5d5a046a3ec0b007dec25892be1" gracePeriod=30 Mar 12 15:08:26 crc kubenswrapper[4832]: I0312 15:08:26.971354 4832 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.017074 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.206654767 podStartE2EDuration="7.017056141s" podCreationTimestamp="2026-03-12 15:08:20 +0000 UTC" firstStartedPulling="2026-03-12 15:08:20.769202216 +0000 UTC m=+1259.413216482" lastFinishedPulling="2026-03-12 15:08:25.57960363 +0000 UTC m=+1264.223617856" observedRunningTime="2026-03-12 15:08:27.005722045 +0000 UTC m=+1265.649736311" watchObservedRunningTime="2026-03-12 15:08:27.017056141 +0000 UTC m=+1265.661070367" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.455788 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hkl8h" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.530376 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mddvp" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.556824 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-10f7-account-create-update-mvmhr" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.568773 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-tpjpw" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.601375 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djwnh\" (UniqueName: \"kubernetes.io/projected/6ea781ce-2058-4f49-b8c5-b0886379887a-kube-api-access-djwnh\") pod \"6ea781ce-2058-4f49-b8c5-b0886379887a\" (UID: \"6ea781ce-2058-4f49-b8c5-b0886379887a\") " Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.601537 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4bfl\" (UniqueName: \"kubernetes.io/projected/9b1af4f2-c6be-4256-9a21-5df02b6e04c7-kube-api-access-b4bfl\") pod \"9b1af4f2-c6be-4256-9a21-5df02b6e04c7\" (UID: \"9b1af4f2-c6be-4256-9a21-5df02b6e04c7\") " Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.601561 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ea781ce-2058-4f49-b8c5-b0886379887a-operator-scripts\") pod \"6ea781ce-2058-4f49-b8c5-b0886379887a\" (UID: \"6ea781ce-2058-4f49-b8c5-b0886379887a\") " Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.601649 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b1af4f2-c6be-4256-9a21-5df02b6e04c7-operator-scripts\") pod \"9b1af4f2-c6be-4256-9a21-5df02b6e04c7\" (UID: \"9b1af4f2-c6be-4256-9a21-5df02b6e04c7\") " Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.602471 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea781ce-2058-4f49-b8c5-b0886379887a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ea781ce-2058-4f49-b8c5-b0886379887a" (UID: "6ea781ce-2058-4f49-b8c5-b0886379887a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.602475 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b1af4f2-c6be-4256-9a21-5df02b6e04c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9b1af4f2-c6be-4256-9a21-5df02b6e04c7" (UID: "9b1af4f2-c6be-4256-9a21-5df02b6e04c7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.609097 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea781ce-2058-4f49-b8c5-b0886379887a-kube-api-access-djwnh" (OuterVolumeSpecName: "kube-api-access-djwnh") pod "6ea781ce-2058-4f49-b8c5-b0886379887a" (UID: "6ea781ce-2058-4f49-b8c5-b0886379887a"). InnerVolumeSpecName "kube-api-access-djwnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.609140 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b1af4f2-c6be-4256-9a21-5df02b6e04c7-kube-api-access-b4bfl" (OuterVolumeSpecName: "kube-api-access-b4bfl") pod "9b1af4f2-c6be-4256-9a21-5df02b6e04c7" (UID: "9b1af4f2-c6be-4256-9a21-5df02b6e04c7"). InnerVolumeSpecName "kube-api-access-b4bfl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.703354 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4h2z\" (UniqueName: \"kubernetes.io/projected/4d35b44d-107c-41c3-bbb4-02d9059167e5-kube-api-access-q4h2z\") pod \"4d35b44d-107c-41c3-bbb4-02d9059167e5\" (UID: \"4d35b44d-107c-41c3-bbb4-02d9059167e5\") " Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.703626 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d35b44d-107c-41c3-bbb4-02d9059167e5-operator-scripts\") pod \"4d35b44d-107c-41c3-bbb4-02d9059167e5\" (UID: \"4d35b44d-107c-41c3-bbb4-02d9059167e5\") " Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.703707 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8wkx\" (UniqueName: \"kubernetes.io/projected/15a6d52a-8c4b-4d33-a56f-3173bf227728-kube-api-access-x8wkx\") pod \"15a6d52a-8c4b-4d33-a56f-3173bf227728\" (UID: \"15a6d52a-8c4b-4d33-a56f-3173bf227728\") " Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.703826 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15a6d52a-8c4b-4d33-a56f-3173bf227728-operator-scripts\") pod \"15a6d52a-8c4b-4d33-a56f-3173bf227728\" (UID: \"15a6d52a-8c4b-4d33-a56f-3173bf227728\") " Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.704491 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a6d52a-8c4b-4d33-a56f-3173bf227728-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15a6d52a-8c4b-4d33-a56f-3173bf227728" (UID: "15a6d52a-8c4b-4d33-a56f-3173bf227728"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.704610 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d35b44d-107c-41c3-bbb4-02d9059167e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d35b44d-107c-41c3-bbb4-02d9059167e5" (UID: "4d35b44d-107c-41c3-bbb4-02d9059167e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.706572 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b1af4f2-c6be-4256-9a21-5df02b6e04c7-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.706593 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15a6d52a-8c4b-4d33-a56f-3173bf227728-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.706603 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djwnh\" (UniqueName: \"kubernetes.io/projected/6ea781ce-2058-4f49-b8c5-b0886379887a-kube-api-access-djwnh\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.706614 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4bfl\" (UniqueName: \"kubernetes.io/projected/9b1af4f2-c6be-4256-9a21-5df02b6e04c7-kube-api-access-b4bfl\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.706624 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ea781ce-2058-4f49-b8c5-b0886379887a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.706633 4832 reconciler_common.go:293] "Volume detached for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d35b44d-107c-41c3-bbb4-02d9059167e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.706765 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a6d52a-8c4b-4d33-a56f-3173bf227728-kube-api-access-x8wkx" (OuterVolumeSpecName: "kube-api-access-x8wkx") pod "15a6d52a-8c4b-4d33-a56f-3173bf227728" (UID: "15a6d52a-8c4b-4d33-a56f-3173bf227728"). InnerVolumeSpecName "kube-api-access-x8wkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.708196 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d35b44d-107c-41c3-bbb4-02d9059167e5-kube-api-access-q4h2z" (OuterVolumeSpecName: "kube-api-access-q4h2z") pod "4d35b44d-107c-41c3-bbb4-02d9059167e5" (UID: "4d35b44d-107c-41c3-bbb4-02d9059167e5"). InnerVolumeSpecName "kube-api-access-q4h2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.808488 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8wkx\" (UniqueName: \"kubernetes.io/projected/15a6d52a-8c4b-4d33-a56f-3173bf227728-kube-api-access-x8wkx\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.808542 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4h2z\" (UniqueName: \"kubernetes.io/projected/4d35b44d-107c-41c3-bbb4-02d9059167e5-kube-api-access-q4h2z\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.983266 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hkl8h" event={"ID":"9b1af4f2-c6be-4256-9a21-5df02b6e04c7","Type":"ContainerDied","Data":"e7b181eaa226fb6400bdbabed681e96fc52d210a4590973a35394cc0a7de2f22"} Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.983327 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7b181eaa226fb6400bdbabed681e96fc52d210a4590973a35394cc0a7de2f22" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.983408 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hkl8h" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.997154 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-tpjpw" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.997665 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tpjpw" event={"ID":"4d35b44d-107c-41c3-bbb4-02d9059167e5","Type":"ContainerDied","Data":"fa3eb828cda68693bf5af0f020aca16cfbd09851699559eb51519afcff1e1429"} Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.997695 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa3eb828cda68693bf5af0f020aca16cfbd09851699559eb51519afcff1e1429" Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.999885 4832 generic.go:334] "Generic (PLEG): container finished" podID="0a7e7258-0293-4dd3-8733-f9436098b523" containerID="b888a43cbdee8e15b4bf7fd57d5fe900b04d561338dc0e144287bdd01125971a" exitCode=0 Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.999907 4832 generic.go:334] "Generic (PLEG): container finished" podID="0a7e7258-0293-4dd3-8733-f9436098b523" containerID="1f676224f1ce956a5766c24b9ea42b04e47fee466501be816798b7194deb580d" exitCode=2 Mar 12 15:08:27 crc kubenswrapper[4832]: I0312 15:08:27.999915 4832 generic.go:334] "Generic (PLEG): container finished" podID="0a7e7258-0293-4dd3-8733-f9436098b523" containerID="a502c6c364ba24446e41112936589e89dfc6c88e18f17f2493af32a3f0a869d1" exitCode=0 Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:27.999943 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a7e7258-0293-4dd3-8733-f9436098b523","Type":"ContainerDied","Data":"b888a43cbdee8e15b4bf7fd57d5fe900b04d561338dc0e144287bdd01125971a"} Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:27.999960 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a7e7258-0293-4dd3-8733-f9436098b523","Type":"ContainerDied","Data":"1f676224f1ce956a5766c24b9ea42b04e47fee466501be816798b7194deb580d"} Mar 12 15:08:28 crc kubenswrapper[4832]: 
I0312 15:08:27.999971 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a7e7258-0293-4dd3-8733-f9436098b523","Type":"ContainerDied","Data":"a502c6c364ba24446e41112936589e89dfc6c88e18f17f2493af32a3f0a869d1"} Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.001621 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-10f7-account-create-update-mvmhr" event={"ID":"15a6d52a-8c4b-4d33-a56f-3173bf227728","Type":"ContainerDied","Data":"1bbc75721b91002a9775067ebda29679c5aa6e3dc99e689b558e8ceb8f137757"} Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.001654 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bbc75721b91002a9775067ebda29679c5aa6e3dc99e689b558e8ceb8f137757" Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.001702 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-10f7-account-create-update-mvmhr" Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.010023 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mddvp" event={"ID":"6ea781ce-2058-4f49-b8c5-b0886379887a","Type":"ContainerDied","Data":"7a3c466f34a1c4af24f4426db72deb187ea648b4646f42b9772a0bfa50e80ab1"} Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.010226 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a3c466f34a1c4af24f4426db72deb187ea648b4646f42b9772a0bfa50e80ab1" Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.010239 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mddvp" Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.448558 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-13fb-account-create-update-qgfdt" Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.454146 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3211-account-create-update-g55fm" Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.533382 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9n8t\" (UniqueName: \"kubernetes.io/projected/cb14d699-d295-4ed3-ab30-a16b79ec7d94-kube-api-access-w9n8t\") pod \"cb14d699-d295-4ed3-ab30-a16b79ec7d94\" (UID: \"cb14d699-d295-4ed3-ab30-a16b79ec7d94\") " Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.533478 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb14d699-d295-4ed3-ab30-a16b79ec7d94-operator-scripts\") pod \"cb14d699-d295-4ed3-ab30-a16b79ec7d94\" (UID: \"cb14d699-d295-4ed3-ab30-a16b79ec7d94\") " Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.533604 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1922dec6-71fa-4ea1-94da-69bb83431a82-operator-scripts\") pod \"1922dec6-71fa-4ea1-94da-69bb83431a82\" (UID: \"1922dec6-71fa-4ea1-94da-69bb83431a82\") " Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.533667 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx6df\" (UniqueName: \"kubernetes.io/projected/1922dec6-71fa-4ea1-94da-69bb83431a82-kube-api-access-cx6df\") pod \"1922dec6-71fa-4ea1-94da-69bb83431a82\" (UID: \"1922dec6-71fa-4ea1-94da-69bb83431a82\") " Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.534791 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb14d699-d295-4ed3-ab30-a16b79ec7d94-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "cb14d699-d295-4ed3-ab30-a16b79ec7d94" (UID: "cb14d699-d295-4ed3-ab30-a16b79ec7d94"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.534852 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1922dec6-71fa-4ea1-94da-69bb83431a82-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1922dec6-71fa-4ea1-94da-69bb83431a82" (UID: "1922dec6-71fa-4ea1-94da-69bb83431a82"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.539123 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1922dec6-71fa-4ea1-94da-69bb83431a82-kube-api-access-cx6df" (OuterVolumeSpecName: "kube-api-access-cx6df") pod "1922dec6-71fa-4ea1-94da-69bb83431a82" (UID: "1922dec6-71fa-4ea1-94da-69bb83431a82"). InnerVolumeSpecName "kube-api-access-cx6df". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.556588 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb14d699-d295-4ed3-ab30-a16b79ec7d94-kube-api-access-w9n8t" (OuterVolumeSpecName: "kube-api-access-w9n8t") pod "cb14d699-d295-4ed3-ab30-a16b79ec7d94" (UID: "cb14d699-d295-4ed3-ab30-a16b79ec7d94"). InnerVolumeSpecName "kube-api-access-w9n8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.638725 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1922dec6-71fa-4ea1-94da-69bb83431a82-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.638761 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx6df\" (UniqueName: \"kubernetes.io/projected/1922dec6-71fa-4ea1-94da-69bb83431a82-kube-api-access-cx6df\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.638777 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9n8t\" (UniqueName: \"kubernetes.io/projected/cb14d699-d295-4ed3-ab30-a16b79ec7d94-kube-api-access-w9n8t\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:28 crc kubenswrapper[4832]: I0312 15:08:28.638789 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb14d699-d295-4ed3-ab30-a16b79ec7d94-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:29 crc kubenswrapper[4832]: I0312 15:08:29.030201 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-13fb-account-create-update-qgfdt" event={"ID":"cb14d699-d295-4ed3-ab30-a16b79ec7d94","Type":"ContainerDied","Data":"f50b2911f46357ecb0bcac86da8aab2ee5992091d1d19ba214a42d79edf7ab09"} Mar 12 15:08:29 crc kubenswrapper[4832]: I0312 15:08:29.030680 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f50b2911f46357ecb0bcac86da8aab2ee5992091d1d19ba214a42d79edf7ab09" Mar 12 15:08:29 crc kubenswrapper[4832]: I0312 15:08:29.030222 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-13fb-account-create-update-qgfdt" Mar 12 15:08:29 crc kubenswrapper[4832]: I0312 15:08:29.034054 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3211-account-create-update-g55fm" event={"ID":"1922dec6-71fa-4ea1-94da-69bb83431a82","Type":"ContainerDied","Data":"febf795347e15002a6bbc30b4e5e1349bfed545ea88f8d131c7f53e3e708afee"} Mar 12 15:08:29 crc kubenswrapper[4832]: I0312 15:08:29.034102 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="febf795347e15002a6bbc30b4e5e1349bfed545ea88f8d131c7f53e3e708afee" Mar 12 15:08:29 crc kubenswrapper[4832]: I0312 15:08:29.034117 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3211-account-create-update-g55fm" Mar 12 15:08:29 crc kubenswrapper[4832]: I0312 15:08:29.936002 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.049220 4832 generic.go:334] "Generic (PLEG): container finished" podID="0a7e7258-0293-4dd3-8733-f9436098b523" containerID="9539e8e2458bd9a7c5c58a91341071efc686e5d5a046a3ec0b007dec25892be1" exitCode=0 Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.049564 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a7e7258-0293-4dd3-8733-f9436098b523","Type":"ContainerDied","Data":"9539e8e2458bd9a7c5c58a91341071efc686e5d5a046a3ec0b007dec25892be1"} Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.049593 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a7e7258-0293-4dd3-8733-f9436098b523","Type":"ContainerDied","Data":"c9a9fbd8ca427429eac2d05578959916ebbedef257992e60edacbbc02fee0647"} Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.049611 4832 scope.go:117] "RemoveContainer" 
containerID="b888a43cbdee8e15b4bf7fd57d5fe900b04d561338dc0e144287bdd01125971a" Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.049745 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.062186 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-sg-core-conf-yaml\") pod \"0a7e7258-0293-4dd3-8733-f9436098b523\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.062225 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-ceilometer-tls-certs\") pod \"0a7e7258-0293-4dd3-8733-f9436098b523\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.062277 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-config-data\") pod \"0a7e7258-0293-4dd3-8733-f9436098b523\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.062312 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a7e7258-0293-4dd3-8733-f9436098b523-log-httpd\") pod \"0a7e7258-0293-4dd3-8733-f9436098b523\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.062356 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-combined-ca-bundle\") pod \"0a7e7258-0293-4dd3-8733-f9436098b523\" (UID: 
\"0a7e7258-0293-4dd3-8733-f9436098b523\") " Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.062379 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a7e7258-0293-4dd3-8733-f9436098b523-run-httpd\") pod \"0a7e7258-0293-4dd3-8733-f9436098b523\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.062424 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dksct\" (UniqueName: \"kubernetes.io/projected/0a7e7258-0293-4dd3-8733-f9436098b523-kube-api-access-dksct\") pod \"0a7e7258-0293-4dd3-8733-f9436098b523\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.062471 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-scripts\") pod \"0a7e7258-0293-4dd3-8733-f9436098b523\" (UID: \"0a7e7258-0293-4dd3-8733-f9436098b523\") " Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.066068 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a7e7258-0293-4dd3-8733-f9436098b523-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0a7e7258-0293-4dd3-8733-f9436098b523" (UID: "0a7e7258-0293-4dd3-8733-f9436098b523"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.067490 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a7e7258-0293-4dd3-8733-f9436098b523-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0a7e7258-0293-4dd3-8733-f9436098b523" (UID: "0a7e7258-0293-4dd3-8733-f9436098b523"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.075822 4832 scope.go:117] "RemoveContainer" containerID="1f676224f1ce956a5766c24b9ea42b04e47fee466501be816798b7194deb580d" Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.082364 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-scripts" (OuterVolumeSpecName: "scripts") pod "0a7e7258-0293-4dd3-8733-f9436098b523" (UID: "0a7e7258-0293-4dd3-8733-f9436098b523"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.094702 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7e7258-0293-4dd3-8733-f9436098b523-kube-api-access-dksct" (OuterVolumeSpecName: "kube-api-access-dksct") pod "0a7e7258-0293-4dd3-8733-f9436098b523" (UID: "0a7e7258-0293-4dd3-8733-f9436098b523"). InnerVolumeSpecName "kube-api-access-dksct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.111664 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0a7e7258-0293-4dd3-8733-f9436098b523" (UID: "0a7e7258-0293-4dd3-8733-f9436098b523"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.169658 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.169691 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a7e7258-0293-4dd3-8733-f9436098b523-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.169702 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a7e7258-0293-4dd3-8733-f9436098b523-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.169711 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dksct\" (UniqueName: \"kubernetes.io/projected/0a7e7258-0293-4dd3-8733-f9436098b523-kube-api-access-dksct\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.169719 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.171701 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0a7e7258-0293-4dd3-8733-f9436098b523" (UID: "0a7e7258-0293-4dd3-8733-f9436098b523"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.196663 4832 scope.go:117] "RemoveContainer" containerID="a502c6c364ba24446e41112936589e89dfc6c88e18f17f2493af32a3f0a869d1" Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.200357 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a7e7258-0293-4dd3-8733-f9436098b523" (UID: "0a7e7258-0293-4dd3-8733-f9436098b523"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.234024 4832 scope.go:117] "RemoveContainer" containerID="9539e8e2458bd9a7c5c58a91341071efc686e5d5a046a3ec0b007dec25892be1" Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.251623 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-config-data" (OuterVolumeSpecName: "config-data") pod "0a7e7258-0293-4dd3-8733-f9436098b523" (UID: "0a7e7258-0293-4dd3-8733-f9436098b523"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.271685 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.271958 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-config-data\") on node \"crc\" DevicePath \"\""
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.271967 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7e7258-0293-4dd3-8733-f9436098b523-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.288575 4832 scope.go:117] "RemoveContainer" containerID="b888a43cbdee8e15b4bf7fd57d5fe900b04d561338dc0e144287bdd01125971a"
Mar 12 15:08:30 crc kubenswrapper[4832]: E0312 15:08:30.291218 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b888a43cbdee8e15b4bf7fd57d5fe900b04d561338dc0e144287bdd01125971a\": container with ID starting with b888a43cbdee8e15b4bf7fd57d5fe900b04d561338dc0e144287bdd01125971a not found: ID does not exist" containerID="b888a43cbdee8e15b4bf7fd57d5fe900b04d561338dc0e144287bdd01125971a"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.291249 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b888a43cbdee8e15b4bf7fd57d5fe900b04d561338dc0e144287bdd01125971a"} err="failed to get container status \"b888a43cbdee8e15b4bf7fd57d5fe900b04d561338dc0e144287bdd01125971a\": rpc error: code = NotFound desc = could not find container \"b888a43cbdee8e15b4bf7fd57d5fe900b04d561338dc0e144287bdd01125971a\": container with ID starting with b888a43cbdee8e15b4bf7fd57d5fe900b04d561338dc0e144287bdd01125971a not found: ID does not exist"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.291270 4832 scope.go:117] "RemoveContainer" containerID="1f676224f1ce956a5766c24b9ea42b04e47fee466501be816798b7194deb580d"
Mar 12 15:08:30 crc kubenswrapper[4832]: E0312 15:08:30.293014 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f676224f1ce956a5766c24b9ea42b04e47fee466501be816798b7194deb580d\": container with ID starting with 1f676224f1ce956a5766c24b9ea42b04e47fee466501be816798b7194deb580d not found: ID does not exist" containerID="1f676224f1ce956a5766c24b9ea42b04e47fee466501be816798b7194deb580d"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.293048 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f676224f1ce956a5766c24b9ea42b04e47fee466501be816798b7194deb580d"} err="failed to get container status \"1f676224f1ce956a5766c24b9ea42b04e47fee466501be816798b7194deb580d\": rpc error: code = NotFound desc = could not find container \"1f676224f1ce956a5766c24b9ea42b04e47fee466501be816798b7194deb580d\": container with ID starting with 1f676224f1ce956a5766c24b9ea42b04e47fee466501be816798b7194deb580d not found: ID does not exist"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.293063 4832 scope.go:117] "RemoveContainer" containerID="a502c6c364ba24446e41112936589e89dfc6c88e18f17f2493af32a3f0a869d1"
Mar 12 15:08:30 crc kubenswrapper[4832]: E0312 15:08:30.293304 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a502c6c364ba24446e41112936589e89dfc6c88e18f17f2493af32a3f0a869d1\": container with ID starting with a502c6c364ba24446e41112936589e89dfc6c88e18f17f2493af32a3f0a869d1 not found: ID does not exist" containerID="a502c6c364ba24446e41112936589e89dfc6c88e18f17f2493af32a3f0a869d1"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.293327 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a502c6c364ba24446e41112936589e89dfc6c88e18f17f2493af32a3f0a869d1"} err="failed to get container status \"a502c6c364ba24446e41112936589e89dfc6c88e18f17f2493af32a3f0a869d1\": rpc error: code = NotFound desc = could not find container \"a502c6c364ba24446e41112936589e89dfc6c88e18f17f2493af32a3f0a869d1\": container with ID starting with a502c6c364ba24446e41112936589e89dfc6c88e18f17f2493af32a3f0a869d1 not found: ID does not exist"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.293351 4832 scope.go:117] "RemoveContainer" containerID="9539e8e2458bd9a7c5c58a91341071efc686e5d5a046a3ec0b007dec25892be1"
Mar 12 15:08:30 crc kubenswrapper[4832]: E0312 15:08:30.293558 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9539e8e2458bd9a7c5c58a91341071efc686e5d5a046a3ec0b007dec25892be1\": container with ID starting with 9539e8e2458bd9a7c5c58a91341071efc686e5d5a046a3ec0b007dec25892be1 not found: ID does not exist" containerID="9539e8e2458bd9a7c5c58a91341071efc686e5d5a046a3ec0b007dec25892be1"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.293579 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9539e8e2458bd9a7c5c58a91341071efc686e5d5a046a3ec0b007dec25892be1"} err="failed to get container status \"9539e8e2458bd9a7c5c58a91341071efc686e5d5a046a3ec0b007dec25892be1\": rpc error: code = NotFound desc = could not find container \"9539e8e2458bd9a7c5c58a91341071efc686e5d5a046a3ec0b007dec25892be1\": container with ID starting with 9539e8e2458bd9a7c5c58a91341071efc686e5d5a046a3ec0b007dec25892be1 not found: ID does not exist"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.378631 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.386884 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.407961 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 12 15:08:30 crc kubenswrapper[4832]: E0312 15:08:30.408322 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb14d699-d295-4ed3-ab30-a16b79ec7d94" containerName="mariadb-account-create-update"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408342 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb14d699-d295-4ed3-ab30-a16b79ec7d94" containerName="mariadb-account-create-update"
Mar 12 15:08:30 crc kubenswrapper[4832]: E0312 15:08:30.408352 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d35b44d-107c-41c3-bbb4-02d9059167e5" containerName="mariadb-database-create"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408359 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d35b44d-107c-41c3-bbb4-02d9059167e5" containerName="mariadb-database-create"
Mar 12 15:08:30 crc kubenswrapper[4832]: E0312 15:08:30.408377 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b1af4f2-c6be-4256-9a21-5df02b6e04c7" containerName="mariadb-database-create"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408383 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b1af4f2-c6be-4256-9a21-5df02b6e04c7" containerName="mariadb-database-create"
Mar 12 15:08:30 crc kubenswrapper[4832]: E0312 15:08:30.408396 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7e7258-0293-4dd3-8733-f9436098b523" containerName="proxy-httpd"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408401 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7e7258-0293-4dd3-8733-f9436098b523" containerName="proxy-httpd"
Mar 12 15:08:30 crc kubenswrapper[4832]: E0312 15:08:30.408412 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ea781ce-2058-4f49-b8c5-b0886379887a" containerName="mariadb-database-create"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408418 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ea781ce-2058-4f49-b8c5-b0886379887a" containerName="mariadb-database-create"
Mar 12 15:08:30 crc kubenswrapper[4832]: E0312 15:08:30.408427 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a6d52a-8c4b-4d33-a56f-3173bf227728" containerName="mariadb-account-create-update"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408433 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a6d52a-8c4b-4d33-a56f-3173bf227728" containerName="mariadb-account-create-update"
Mar 12 15:08:30 crc kubenswrapper[4832]: E0312 15:08:30.408450 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7e7258-0293-4dd3-8733-f9436098b523" containerName="ceilometer-notification-agent"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408456 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7e7258-0293-4dd3-8733-f9436098b523" containerName="ceilometer-notification-agent"
Mar 12 15:08:30 crc kubenswrapper[4832]: E0312 15:08:30.408466 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7e7258-0293-4dd3-8733-f9436098b523" containerName="sg-core"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408472 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7e7258-0293-4dd3-8733-f9436098b523" containerName="sg-core"
Mar 12 15:08:30 crc kubenswrapper[4832]: E0312 15:08:30.408485 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1922dec6-71fa-4ea1-94da-69bb83431a82" containerName="mariadb-account-create-update"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408492 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1922dec6-71fa-4ea1-94da-69bb83431a82" containerName="mariadb-account-create-update"
Mar 12 15:08:30 crc kubenswrapper[4832]: E0312 15:08:30.408523 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7e7258-0293-4dd3-8733-f9436098b523" containerName="ceilometer-central-agent"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408529 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7e7258-0293-4dd3-8733-f9436098b523" containerName="ceilometer-central-agent"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408691 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b1af4f2-c6be-4256-9a21-5df02b6e04c7" containerName="mariadb-database-create"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408705 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1922dec6-71fa-4ea1-94da-69bb83431a82" containerName="mariadb-account-create-update"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408716 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb14d699-d295-4ed3-ab30-a16b79ec7d94" containerName="mariadb-account-create-update"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408727 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7e7258-0293-4dd3-8733-f9436098b523" containerName="ceilometer-notification-agent"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408734 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d35b44d-107c-41c3-bbb4-02d9059167e5" containerName="mariadb-database-create"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408745 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7e7258-0293-4dd3-8733-f9436098b523" containerName="proxy-httpd"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408753 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a6d52a-8c4b-4d33-a56f-3173bf227728" containerName="mariadb-account-create-update"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408760 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ea781ce-2058-4f49-b8c5-b0886379887a" containerName="mariadb-database-create"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408772 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7e7258-0293-4dd3-8733-f9436098b523" containerName="ceilometer-central-agent"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.408779 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7e7258-0293-4dd3-8733-f9436098b523" containerName="sg-core"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.410239 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.415156 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.417911 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.425005 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.429421 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.474388 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.474523 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5497a102-9c08-4111-b686-3e1762c474da-run-httpd\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.474596 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-config-data\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.474622 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-scripts\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.474697 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.474824 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.474926 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp6qc\" (UniqueName: \"kubernetes.io/projected/5497a102-9c08-4111-b686-3e1762c474da-kube-api-access-lp6qc\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.475010 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5497a102-9c08-4111-b686-3e1762c474da-log-httpd\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.576288 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5497a102-9c08-4111-b686-3e1762c474da-run-httpd\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.576544 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-config-data\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.576573 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-scripts\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.576621 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.576641 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.576669 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp6qc\" (UniqueName: \"kubernetes.io/projected/5497a102-9c08-4111-b686-3e1762c474da-kube-api-access-lp6qc\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.576700 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5497a102-9c08-4111-b686-3e1762c474da-log-httpd\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.576746 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.577892 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5497a102-9c08-4111-b686-3e1762c474da-run-httpd\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.578195 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5497a102-9c08-4111-b686-3e1762c474da-log-httpd\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.581475 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.582002 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.582429 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-scripts\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.583775 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.584558 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-config-data\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.604295 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp6qc\" (UniqueName: \"kubernetes.io/projected/5497a102-9c08-4111-b686-3e1762c474da-kube-api-access-lp6qc\") pod \"ceilometer-0\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " pod="openstack/ceilometer-0"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.644367 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a7e7258-0293-4dd3-8733-f9436098b523" path="/var/lib/kubelet/pods/0a7e7258-0293-4dd3-8733-f9436098b523/volumes"
Mar 12 15:08:30 crc kubenswrapper[4832]: I0312 15:08:30.746338 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 12 15:08:31 crc kubenswrapper[4832]: I0312 15:08:31.198407 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 12 15:08:31 crc kubenswrapper[4832]: W0312 15:08:31.200740 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5497a102_9c08_4111_b686_3e1762c474da.slice/crio-8257e202f308566ddad0987331b895a3455c81674655799bfab8de74a3a1dcb7 WatchSource:0}: Error finding container 8257e202f308566ddad0987331b895a3455c81674655799bfab8de74a3a1dcb7: Status 404 returned error can't find the container with id 8257e202f308566ddad0987331b895a3455c81674655799bfab8de74a3a1dcb7
Mar 12 15:08:32 crc kubenswrapper[4832]: I0312 15:08:32.070212 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5497a102-9c08-4111-b686-3e1762c474da","Type":"ContainerStarted","Data":"8257e202f308566ddad0987331b895a3455c81674655799bfab8de74a3a1dcb7"}
Mar 12 15:08:33 crc kubenswrapper[4832]: I0312 15:08:33.083060 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5497a102-9c08-4111-b686-3e1762c474da","Type":"ContainerStarted","Data":"0c7af4ef60a9f01361e6aba671d9801a37cb73ee144f7b0a8f5e1b65a78c8c67"}
Mar 12 15:08:33 crc kubenswrapper[4832]: I0312 15:08:33.083380 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5497a102-9c08-4111-b686-3e1762c474da","Type":"ContainerStarted","Data":"104579fa01e7c5c9632ba5262ddfed4799c138f2bb4f223243809ffac685d5e3"}
Mar 12 15:08:34 crc kubenswrapper[4832]: I0312 15:08:34.592165 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rclwl"]
Mar 12 15:08:34 crc kubenswrapper[4832]: I0312 15:08:34.608227 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rclwl"
Mar 12 15:08:34 crc kubenswrapper[4832]: I0312 15:08:34.625763 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Mar 12 15:08:34 crc kubenswrapper[4832]: I0312 15:08:34.625766 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wdt7d"
Mar 12 15:08:34 crc kubenswrapper[4832]: I0312 15:08:34.628743 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Mar 12 15:08:34 crc kubenswrapper[4832]: I0312 15:08:34.650773 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a680707-4129-474a-8c85-5395a10c821b-config-data\") pod \"nova-cell0-conductor-db-sync-rclwl\" (UID: \"0a680707-4129-474a-8c85-5395a10c821b\") " pod="openstack/nova-cell0-conductor-db-sync-rclwl"
Mar 12 15:08:34 crc kubenswrapper[4832]: I0312 15:08:34.650896 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a680707-4129-474a-8c85-5395a10c821b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rclwl\" (UID: \"0a680707-4129-474a-8c85-5395a10c821b\") " pod="openstack/nova-cell0-conductor-db-sync-rclwl"
Mar 12 15:08:34 crc kubenswrapper[4832]: I0312 15:08:34.650947 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sctwz\" (UniqueName: \"kubernetes.io/projected/0a680707-4129-474a-8c85-5395a10c821b-kube-api-access-sctwz\") pod \"nova-cell0-conductor-db-sync-rclwl\" (UID: \"0a680707-4129-474a-8c85-5395a10c821b\") " pod="openstack/nova-cell0-conductor-db-sync-rclwl"
Mar 12 15:08:34 crc kubenswrapper[4832]: I0312 15:08:34.650965 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a680707-4129-474a-8c85-5395a10c821b-scripts\") pod \"nova-cell0-conductor-db-sync-rclwl\" (UID: \"0a680707-4129-474a-8c85-5395a10c821b\") " pod="openstack/nova-cell0-conductor-db-sync-rclwl"
Mar 12 15:08:34 crc kubenswrapper[4832]: I0312 15:08:34.654767 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rclwl"]
Mar 12 15:08:34 crc kubenswrapper[4832]: I0312 15:08:34.752428 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sctwz\" (UniqueName: \"kubernetes.io/projected/0a680707-4129-474a-8c85-5395a10c821b-kube-api-access-sctwz\") pod \"nova-cell0-conductor-db-sync-rclwl\" (UID: \"0a680707-4129-474a-8c85-5395a10c821b\") " pod="openstack/nova-cell0-conductor-db-sync-rclwl"
Mar 12 15:08:34 crc kubenswrapper[4832]: I0312 15:08:34.752481 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a680707-4129-474a-8c85-5395a10c821b-scripts\") pod \"nova-cell0-conductor-db-sync-rclwl\" (UID: \"0a680707-4129-474a-8c85-5395a10c821b\") " pod="openstack/nova-cell0-conductor-db-sync-rclwl"
Mar 12 15:08:34 crc kubenswrapper[4832]: I0312 15:08:34.752585 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a680707-4129-474a-8c85-5395a10c821b-config-data\") pod \"nova-cell0-conductor-db-sync-rclwl\" (UID: \"0a680707-4129-474a-8c85-5395a10c821b\") " pod="openstack/nova-cell0-conductor-db-sync-rclwl"
Mar 12 15:08:34 crc kubenswrapper[4832]: I0312 15:08:34.752676 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a680707-4129-474a-8c85-5395a10c821b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rclwl\" (UID: \"0a680707-4129-474a-8c85-5395a10c821b\") " pod="openstack/nova-cell0-conductor-db-sync-rclwl"
Mar 12 15:08:34 crc kubenswrapper[4832]: I0312 15:08:34.762618 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a680707-4129-474a-8c85-5395a10c821b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rclwl\" (UID: \"0a680707-4129-474a-8c85-5395a10c821b\") " pod="openstack/nova-cell0-conductor-db-sync-rclwl"
Mar 12 15:08:34 crc kubenswrapper[4832]: I0312 15:08:34.830899 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a680707-4129-474a-8c85-5395a10c821b-config-data\") pod \"nova-cell0-conductor-db-sync-rclwl\" (UID: \"0a680707-4129-474a-8c85-5395a10c821b\") " pod="openstack/nova-cell0-conductor-db-sync-rclwl"
Mar 12 15:08:34 crc kubenswrapper[4832]: I0312 15:08:34.832815 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a680707-4129-474a-8c85-5395a10c821b-scripts\") pod \"nova-cell0-conductor-db-sync-rclwl\" (UID: \"0a680707-4129-474a-8c85-5395a10c821b\") " pod="openstack/nova-cell0-conductor-db-sync-rclwl"
Mar 12 15:08:34 crc kubenswrapper[4832]: I0312 15:08:34.838239 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sctwz\" (UniqueName: \"kubernetes.io/projected/0a680707-4129-474a-8c85-5395a10c821b-kube-api-access-sctwz\") pod \"nova-cell0-conductor-db-sync-rclwl\" (UID: \"0a680707-4129-474a-8c85-5395a10c821b\") " pod="openstack/nova-cell0-conductor-db-sync-rclwl"
Mar 12 15:08:34 crc kubenswrapper[4832]: I0312 15:08:34.941834 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rclwl"
Mar 12 15:08:35 crc kubenswrapper[4832]: I0312 15:08:35.111410 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5497a102-9c08-4111-b686-3e1762c474da","Type":"ContainerStarted","Data":"83c477df0d99d09ffd7c8068ce81c1c9330ffa48070d942cc437b092560a0945"}
Mar 12 15:08:35 crc kubenswrapper[4832]: W0312 15:08:35.387004 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a680707_4129_474a_8c85_5395a10c821b.slice/crio-159e06a16603a0ed0fd5edb6e94df224689267289beb8a35fb4b2905637d1e5f WatchSource:0}: Error finding container 159e06a16603a0ed0fd5edb6e94df224689267289beb8a35fb4b2905637d1e5f: Status 404 returned error can't find the container with id 159e06a16603a0ed0fd5edb6e94df224689267289beb8a35fb4b2905637d1e5f
Mar 12 15:08:35 crc kubenswrapper[4832]: I0312 15:08:35.387416 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rclwl"]
Mar 12 15:08:36 crc kubenswrapper[4832]: I0312 15:08:36.122370 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rclwl" event={"ID":"0a680707-4129-474a-8c85-5395a10c821b","Type":"ContainerStarted","Data":"159e06a16603a0ed0fd5edb6e94df224689267289beb8a35fb4b2905637d1e5f"}
Mar 12 15:08:43 crc kubenswrapper[4832]: I0312 15:08:43.263386 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rclwl" event={"ID":"0a680707-4129-474a-8c85-5395a10c821b","Type":"ContainerStarted","Data":"0237b3eedd9b067551e433623895b3095d12df6e6f9e67aeb892067a6bee98d6"}
Mar 12 15:08:43 crc kubenswrapper[4832]: I0312 15:08:43.295530 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-rclwl" podStartSLOduration=2.4540661679999998 podStartE2EDuration="9.295500142s" podCreationTimestamp="2026-03-12 15:08:34 +0000 UTC" firstStartedPulling="2026-03-12 15:08:35.389128594 +0000 UTC m=+1274.033142820" lastFinishedPulling="2026-03-12 15:08:42.230562568 +0000 UTC m=+1280.874576794" observedRunningTime="2026-03-12 15:08:43.283977052 +0000 UTC m=+1281.927991278" watchObservedRunningTime="2026-03-12 15:08:43.295500142 +0000 UTC m=+1281.939514368"
Mar 12 15:08:46 crc kubenswrapper[4832]: I0312 15:08:46.309464 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5497a102-9c08-4111-b686-3e1762c474da","Type":"ContainerStarted","Data":"46b867f589c822dd12887938b76d411d93efdc96759a0db21ef32cc48f28e74f"}
Mar 12 15:08:46 crc kubenswrapper[4832]: I0312 15:08:46.310217 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 12 15:08:46 crc kubenswrapper[4832]: I0312 15:08:46.362212 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.102761228 podStartE2EDuration="16.362178275s" podCreationTimestamp="2026-03-12 15:08:30 +0000 UTC" firstStartedPulling="2026-03-12 15:08:31.203256207 +0000 UTC m=+1269.847270443" lastFinishedPulling="2026-03-12 15:08:45.462673234 +0000 UTC m=+1284.106687490" observedRunningTime="2026-03-12 15:08:46.345839746 +0000 UTC m=+1284.989854022" watchObservedRunningTime="2026-03-12 15:08:46.362178275 +0000 UTC m=+1285.006192551"
Mar 12 15:08:47 crc kubenswrapper[4832]: I0312 15:08:47.301239 4832 scope.go:117] "RemoveContainer" containerID="d4746c978b13da39a08686d05c6de4035f325bfd76850bc79cfdd294b83d442a"
Mar 12 15:08:52 crc kubenswrapper[4832]: I0312 15:08:52.411630 4832 generic.go:334] "Generic (PLEG): container finished" podID="0a680707-4129-474a-8c85-5395a10c821b" containerID="0237b3eedd9b067551e433623895b3095d12df6e6f9e67aeb892067a6bee98d6" exitCode=0
Mar 12 15:08:52 crc kubenswrapper[4832]: I0312 15:08:52.411734 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rclwl" event={"ID":"0a680707-4129-474a-8c85-5395a10c821b","Type":"ContainerDied","Data":"0237b3eedd9b067551e433623895b3095d12df6e6f9e67aeb892067a6bee98d6"}
Mar 12 15:08:53 crc kubenswrapper[4832]: I0312 15:08:53.811061 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rclwl"
Mar 12 15:08:53 crc kubenswrapper[4832]: I0312 15:08:53.859155 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a680707-4129-474a-8c85-5395a10c821b-config-data\") pod \"0a680707-4129-474a-8c85-5395a10c821b\" (UID: \"0a680707-4129-474a-8c85-5395a10c821b\") "
Mar 12 15:08:53 crc kubenswrapper[4832]: I0312 15:08:53.859231 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a680707-4129-474a-8c85-5395a10c821b-combined-ca-bundle\") pod \"0a680707-4129-474a-8c85-5395a10c821b\" (UID: \"0a680707-4129-474a-8c85-5395a10c821b\") "
Mar 12 15:08:53 crc kubenswrapper[4832]: I0312 15:08:53.859369 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a680707-4129-474a-8c85-5395a10c821b-scripts\") pod \"0a680707-4129-474a-8c85-5395a10c821b\" (UID: \"0a680707-4129-474a-8c85-5395a10c821b\") "
Mar 12 15:08:53 crc kubenswrapper[4832]: I0312 15:08:53.859442 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sctwz\" (UniqueName: \"kubernetes.io/projected/0a680707-4129-474a-8c85-5395a10c821b-kube-api-access-sctwz\") pod \"0a680707-4129-474a-8c85-5395a10c821b\" (UID: \"0a680707-4129-474a-8c85-5395a10c821b\") "
Mar 12 15:08:53 crc kubenswrapper[4832]: I0312 15:08:53.866904 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a680707-4129-474a-8c85-5395a10c821b-kube-api-access-sctwz" (OuterVolumeSpecName: "kube-api-access-sctwz") pod "0a680707-4129-474a-8c85-5395a10c821b" (UID: "0a680707-4129-474a-8c85-5395a10c821b"). InnerVolumeSpecName "kube-api-access-sctwz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:08:53 crc kubenswrapper[4832]: I0312 15:08:53.867054 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a680707-4129-474a-8c85-5395a10c821b-scripts" (OuterVolumeSpecName: "scripts") pod "0a680707-4129-474a-8c85-5395a10c821b" (UID: "0a680707-4129-474a-8c85-5395a10c821b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 15:08:53 crc kubenswrapper[4832]: I0312 15:08:53.883396 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a680707-4129-474a-8c85-5395a10c821b-config-data" (OuterVolumeSpecName: "config-data") pod "0a680707-4129-474a-8c85-5395a10c821b" (UID: "0a680707-4129-474a-8c85-5395a10c821b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 15:08:53 crc kubenswrapper[4832]: I0312 15:08:53.897598 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a680707-4129-474a-8c85-5395a10c821b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a680707-4129-474a-8c85-5395a10c821b" (UID: "0a680707-4129-474a-8c85-5395a10c821b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 15:08:53 crc kubenswrapper[4832]: I0312 15:08:53.961319 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sctwz\" (UniqueName: \"kubernetes.io/projected/0a680707-4129-474a-8c85-5395a10c821b-kube-api-access-sctwz\") on node \"crc\" DevicePath \"\""
Mar 12 15:08:53 crc kubenswrapper[4832]: I0312 15:08:53.961723 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a680707-4129-474a-8c85-5395a10c821b-config-data\") on node \"crc\" DevicePath \"\""
Mar 12 15:08:53 crc kubenswrapper[4832]: I0312 15:08:53.961736 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a680707-4129-474a-8c85-5395a10c821b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 15:08:53 crc kubenswrapper[4832]: I0312 15:08:53.961747 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a680707-4129-474a-8c85-5395a10c821b-scripts\") on node \"crc\" DevicePath \"\""
Mar 12 15:08:54 crc kubenswrapper[4832]: I0312 15:08:54.440911 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rclwl" event={"ID":"0a680707-4129-474a-8c85-5395a10c821b","Type":"ContainerDied","Data":"159e06a16603a0ed0fd5edb6e94df224689267289beb8a35fb4b2905637d1e5f"}
Mar 12 15:08:54 crc kubenswrapper[4832]: I0312 15:08:54.440971 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="159e06a16603a0ed0fd5edb6e94df224689267289beb8a35fb4b2905637d1e5f"
Mar 12 15:08:54 crc kubenswrapper[4832]: I0312 15:08:54.441036 4832 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rclwl" Mar 12 15:08:54 crc kubenswrapper[4832]: I0312 15:08:54.613819 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 12 15:08:54 crc kubenswrapper[4832]: E0312 15:08:54.614592 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a680707-4129-474a-8c85-5395a10c821b" containerName="nova-cell0-conductor-db-sync" Mar 12 15:08:54 crc kubenswrapper[4832]: I0312 15:08:54.614637 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a680707-4129-474a-8c85-5395a10c821b" containerName="nova-cell0-conductor-db-sync" Mar 12 15:08:54 crc kubenswrapper[4832]: I0312 15:08:54.615120 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a680707-4129-474a-8c85-5395a10c821b" containerName="nova-cell0-conductor-db-sync" Mar 12 15:08:54 crc kubenswrapper[4832]: I0312 15:08:54.616426 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 12 15:08:54 crc kubenswrapper[4832]: I0312 15:08:54.627253 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 12 15:08:54 crc kubenswrapper[4832]: I0312 15:08:54.627326 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wdt7d" Mar 12 15:08:54 crc kubenswrapper[4832]: I0312 15:08:54.645112 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 12 15:08:54 crc kubenswrapper[4832]: I0312 15:08:54.684648 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d9828b-3b93-4cb1-a0bc-794a23c11f07-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d4d9828b-3b93-4cb1-a0bc-794a23c11f07\") " pod="openstack/nova-cell0-conductor-0" Mar 12 15:08:54 crc kubenswrapper[4832]: 
I0312 15:08:54.684991 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d9828b-3b93-4cb1-a0bc-794a23c11f07-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d4d9828b-3b93-4cb1-a0bc-794a23c11f07\") " pod="openstack/nova-cell0-conductor-0" Mar 12 15:08:54 crc kubenswrapper[4832]: I0312 15:08:54.685115 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm4dd\" (UniqueName: \"kubernetes.io/projected/d4d9828b-3b93-4cb1-a0bc-794a23c11f07-kube-api-access-lm4dd\") pod \"nova-cell0-conductor-0\" (UID: \"d4d9828b-3b93-4cb1-a0bc-794a23c11f07\") " pod="openstack/nova-cell0-conductor-0" Mar 12 15:08:54 crc kubenswrapper[4832]: I0312 15:08:54.787227 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d9828b-3b93-4cb1-a0bc-794a23c11f07-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d4d9828b-3b93-4cb1-a0bc-794a23c11f07\") " pod="openstack/nova-cell0-conductor-0" Mar 12 15:08:54 crc kubenswrapper[4832]: I0312 15:08:54.787323 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d9828b-3b93-4cb1-a0bc-794a23c11f07-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d4d9828b-3b93-4cb1-a0bc-794a23c11f07\") " pod="openstack/nova-cell0-conductor-0" Mar 12 15:08:54 crc kubenswrapper[4832]: I0312 15:08:54.787359 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm4dd\" (UniqueName: \"kubernetes.io/projected/d4d9828b-3b93-4cb1-a0bc-794a23c11f07-kube-api-access-lm4dd\") pod \"nova-cell0-conductor-0\" (UID: \"d4d9828b-3b93-4cb1-a0bc-794a23c11f07\") " pod="openstack/nova-cell0-conductor-0" Mar 12 15:08:54 crc kubenswrapper[4832]: I0312 15:08:54.793017 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d9828b-3b93-4cb1-a0bc-794a23c11f07-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d4d9828b-3b93-4cb1-a0bc-794a23c11f07\") " pod="openstack/nova-cell0-conductor-0" Mar 12 15:08:54 crc kubenswrapper[4832]: I0312 15:08:54.801311 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d9828b-3b93-4cb1-a0bc-794a23c11f07-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d4d9828b-3b93-4cb1-a0bc-794a23c11f07\") " pod="openstack/nova-cell0-conductor-0" Mar 12 15:08:54 crc kubenswrapper[4832]: I0312 15:08:54.810713 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm4dd\" (UniqueName: \"kubernetes.io/projected/d4d9828b-3b93-4cb1-a0bc-794a23c11f07-kube-api-access-lm4dd\") pod \"nova-cell0-conductor-0\" (UID: \"d4d9828b-3b93-4cb1-a0bc-794a23c11f07\") " pod="openstack/nova-cell0-conductor-0" Mar 12 15:08:54 crc kubenswrapper[4832]: I0312 15:08:54.962032 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 12 15:08:55 crc kubenswrapper[4832]: I0312 15:08:55.458180 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 12 15:08:56 crc kubenswrapper[4832]: I0312 15:08:56.470106 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d4d9828b-3b93-4cb1-a0bc-794a23c11f07","Type":"ContainerStarted","Data":"a90e921e068611a998bbaa1fea6a1111f7a8243999cda180b8b2834fa4d37ec2"} Mar 12 15:08:56 crc kubenswrapper[4832]: I0312 15:08:56.470158 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d4d9828b-3b93-4cb1-a0bc-794a23c11f07","Type":"ContainerStarted","Data":"cbe98c3fdbc4dc13cf3839f214a304b3b6882e0d384d5b9fe03dbead6947f690"} Mar 12 15:08:56 crc kubenswrapper[4832]: I0312 15:08:56.470303 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 12 15:08:56 crc kubenswrapper[4832]: I0312 15:08:56.487563 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.487479873 podStartE2EDuration="2.487479873s" podCreationTimestamp="2026-03-12 15:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:08:56.485039993 +0000 UTC m=+1295.129054229" watchObservedRunningTime="2026-03-12 15:08:56.487479873 +0000 UTC m=+1295.131494119" Mar 12 15:09:00 crc kubenswrapper[4832]: I0312 15:09:00.757602 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.010211 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.563468 4832 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-v4wwj"] Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.564859 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v4wwj" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.567927 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.570633 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.579313 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-v4wwj"] Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.608234 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-scripts\") pod \"nova-cell0-cell-mapping-v4wwj\" (UID: \"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\") " pod="openstack/nova-cell0-cell-mapping-v4wwj" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.608338 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-config-data\") pod \"nova-cell0-cell-mapping-v4wwj\" (UID: \"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\") " pod="openstack/nova-cell0-cell-mapping-v4wwj" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.608380 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqv9z\" (UniqueName: \"kubernetes.io/projected/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-kube-api-access-hqv9z\") pod \"nova-cell0-cell-mapping-v4wwj\" (UID: \"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\") " 
pod="openstack/nova-cell0-cell-mapping-v4wwj" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.608447 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-v4wwj\" (UID: \"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\") " pod="openstack/nova-cell0-cell-mapping-v4wwj" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.713641 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-config-data\") pod \"nova-cell0-cell-mapping-v4wwj\" (UID: \"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\") " pod="openstack/nova-cell0-cell-mapping-v4wwj" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.713739 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqv9z\" (UniqueName: \"kubernetes.io/projected/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-kube-api-access-hqv9z\") pod \"nova-cell0-cell-mapping-v4wwj\" (UID: \"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\") " pod="openstack/nova-cell0-cell-mapping-v4wwj" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.713881 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-v4wwj\" (UID: \"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\") " pod="openstack/nova-cell0-cell-mapping-v4wwj" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.713980 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-scripts\") pod \"nova-cell0-cell-mapping-v4wwj\" (UID: \"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\") " 
pod="openstack/nova-cell0-cell-mapping-v4wwj" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.723905 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-config-data\") pod \"nova-cell0-cell-mapping-v4wwj\" (UID: \"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\") " pod="openstack/nova-cell0-cell-mapping-v4wwj" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.738822 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-scripts\") pod \"nova-cell0-cell-mapping-v4wwj\" (UID: \"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\") " pod="openstack/nova-cell0-cell-mapping-v4wwj" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.748865 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqv9z\" (UniqueName: \"kubernetes.io/projected/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-kube-api-access-hqv9z\") pod \"nova-cell0-cell-mapping-v4wwj\" (UID: \"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\") " pod="openstack/nova-cell0-cell-mapping-v4wwj" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.749312 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-v4wwj\" (UID: \"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\") " pod="openstack/nova-cell0-cell-mapping-v4wwj" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.772818 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.774703 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.783693 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.816240 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ded7d78-c253-41e0-a2a3-5ce289c98a72-logs\") pod \"nova-api-0\" (UID: \"6ded7d78-c253-41e0-a2a3-5ce289c98a72\") " pod="openstack/nova-api-0" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.816277 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs4q6\" (UniqueName: \"kubernetes.io/projected/6ded7d78-c253-41e0-a2a3-5ce289c98a72-kube-api-access-bs4q6\") pod \"nova-api-0\" (UID: \"6ded7d78-c253-41e0-a2a3-5ce289c98a72\") " pod="openstack/nova-api-0" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.816332 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ded7d78-c253-41e0-a2a3-5ce289c98a72-config-data\") pod \"nova-api-0\" (UID: \"6ded7d78-c253-41e0-a2a3-5ce289c98a72\") " pod="openstack/nova-api-0" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.816402 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ded7d78-c253-41e0-a2a3-5ce289c98a72-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6ded7d78-c253-41e0-a2a3-5ce289c98a72\") " pod="openstack/nova-api-0" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.845782 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.910140 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v4wwj" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.918225 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ded7d78-c253-41e0-a2a3-5ce289c98a72-logs\") pod \"nova-api-0\" (UID: \"6ded7d78-c253-41e0-a2a3-5ce289c98a72\") " pod="openstack/nova-api-0" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.918314 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs4q6\" (UniqueName: \"kubernetes.io/projected/6ded7d78-c253-41e0-a2a3-5ce289c98a72-kube-api-access-bs4q6\") pod \"nova-api-0\" (UID: \"6ded7d78-c253-41e0-a2a3-5ce289c98a72\") " pod="openstack/nova-api-0" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.918533 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ded7d78-c253-41e0-a2a3-5ce289c98a72-config-data\") pod \"nova-api-0\" (UID: \"6ded7d78-c253-41e0-a2a3-5ce289c98a72\") " pod="openstack/nova-api-0" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.918806 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ded7d78-c253-41e0-a2a3-5ce289c98a72-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6ded7d78-c253-41e0-a2a3-5ce289c98a72\") " pod="openstack/nova-api-0" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.929185 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ded7d78-c253-41e0-a2a3-5ce289c98a72-logs\") pod \"nova-api-0\" (UID: \"6ded7d78-c253-41e0-a2a3-5ce289c98a72\") " pod="openstack/nova-api-0" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.930927 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6ded7d78-c253-41e0-a2a3-5ce289c98a72-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6ded7d78-c253-41e0-a2a3-5ce289c98a72\") " pod="openstack/nova-api-0" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.930982 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.932754 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.950350 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ded7d78-c253-41e0-a2a3-5ce289c98a72-config-data\") pod \"nova-api-0\" (UID: \"6ded7d78-c253-41e0-a2a3-5ce289c98a72\") " pod="openstack/nova-api-0" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.952101 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.959883 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.961994 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.972914 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 12 15:09:05 crc kubenswrapper[4832]: I0312 15:09:05.983223 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.000973 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.024523 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32824c63-ac4d-44cc-a22e-a8be0c6ae3d2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"32824c63-ac4d-44cc-a22e-a8be0c6ae3d2\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.024573 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32824c63-ac4d-44cc-a22e-a8be0c6ae3d2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"32824c63-ac4d-44cc-a22e-a8be0c6ae3d2\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.024620 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fjth\" (UniqueName: \"kubernetes.io/projected/6108009e-5c71-4abd-8f54-1eaad2158264-kube-api-access-5fjth\") pod \"nova-metadata-0\" (UID: \"6108009e-5c71-4abd-8f54-1eaad2158264\") " pod="openstack/nova-metadata-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.024661 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6108009e-5c71-4abd-8f54-1eaad2158264-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6108009e-5c71-4abd-8f54-1eaad2158264\") " pod="openstack/nova-metadata-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.024711 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nxgl\" (UniqueName: \"kubernetes.io/projected/32824c63-ac4d-44cc-a22e-a8be0c6ae3d2-kube-api-access-4nxgl\") pod \"nova-cell1-novncproxy-0\" (UID: \"32824c63-ac4d-44cc-a22e-a8be0c6ae3d2\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.024731 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6108009e-5c71-4abd-8f54-1eaad2158264-config-data\") pod \"nova-metadata-0\" (UID: \"6108009e-5c71-4abd-8f54-1eaad2158264\") " pod="openstack/nova-metadata-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.024763 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6108009e-5c71-4abd-8f54-1eaad2158264-logs\") pod \"nova-metadata-0\" (UID: \"6108009e-5c71-4abd-8f54-1eaad2158264\") " pod="openstack/nova-metadata-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.065098 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs4q6\" (UniqueName: \"kubernetes.io/projected/6ded7d78-c253-41e0-a2a3-5ce289c98a72-kube-api-access-bs4q6\") pod \"nova-api-0\" (UID: \"6ded7d78-c253-41e0-a2a3-5ce289c98a72\") " pod="openstack/nova-api-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.126344 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6108009e-5c71-4abd-8f54-1eaad2158264-logs\") pod \"nova-metadata-0\" (UID: 
\"6108009e-5c71-4abd-8f54-1eaad2158264\") " pod="openstack/nova-metadata-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.126456 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32824c63-ac4d-44cc-a22e-a8be0c6ae3d2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"32824c63-ac4d-44cc-a22e-a8be0c6ae3d2\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.126492 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32824c63-ac4d-44cc-a22e-a8be0c6ae3d2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"32824c63-ac4d-44cc-a22e-a8be0c6ae3d2\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.126573 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fjth\" (UniqueName: \"kubernetes.io/projected/6108009e-5c71-4abd-8f54-1eaad2158264-kube-api-access-5fjth\") pod \"nova-metadata-0\" (UID: \"6108009e-5c71-4abd-8f54-1eaad2158264\") " pod="openstack/nova-metadata-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.126614 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6108009e-5c71-4abd-8f54-1eaad2158264-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6108009e-5c71-4abd-8f54-1eaad2158264\") " pod="openstack/nova-metadata-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.126682 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nxgl\" (UniqueName: \"kubernetes.io/projected/32824c63-ac4d-44cc-a22e-a8be0c6ae3d2-kube-api-access-4nxgl\") pod \"nova-cell1-novncproxy-0\" (UID: \"32824c63-ac4d-44cc-a22e-a8be0c6ae3d2\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:06 crc 
kubenswrapper[4832]: I0312 15:09:06.126711 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6108009e-5c71-4abd-8f54-1eaad2158264-config-data\") pod \"nova-metadata-0\" (UID: \"6108009e-5c71-4abd-8f54-1eaad2158264\") " pod="openstack/nova-metadata-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.131016 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6108009e-5c71-4abd-8f54-1eaad2158264-logs\") pod \"nova-metadata-0\" (UID: \"6108009e-5c71-4abd-8f54-1eaad2158264\") " pod="openstack/nova-metadata-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.134030 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6108009e-5c71-4abd-8f54-1eaad2158264-config-data\") pod \"nova-metadata-0\" (UID: \"6108009e-5c71-4abd-8f54-1eaad2158264\") " pod="openstack/nova-metadata-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.134706 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.136896 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32824c63-ac4d-44cc-a22e-a8be0c6ae3d2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"32824c63-ac4d-44cc-a22e-a8be0c6ae3d2\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.140220 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32824c63-ac4d-44cc-a22e-a8be0c6ae3d2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"32824c63-ac4d-44cc-a22e-a8be0c6ae3d2\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.140832 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6108009e-5c71-4abd-8f54-1eaad2158264-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6108009e-5c71-4abd-8f54-1eaad2158264\") " pod="openstack/nova-metadata-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.191349 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nxgl\" (UniqueName: \"kubernetes.io/projected/32824c63-ac4d-44cc-a22e-a8be0c6ae3d2-kube-api-access-4nxgl\") pod \"nova-cell1-novncproxy-0\" (UID: \"32824c63-ac4d-44cc-a22e-a8be0c6ae3d2\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.192059 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fjth\" (UniqueName: \"kubernetes.io/projected/6108009e-5c71-4abd-8f54-1eaad2158264-kube-api-access-5fjth\") pod \"nova-metadata-0\" (UID: \"6108009e-5c71-4abd-8f54-1eaad2158264\") " pod="openstack/nova-metadata-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.199577 4832 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.200827 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.206720 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.230754 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-hng8b"] Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.232531 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.237834 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct5x5\" (UniqueName: \"kubernetes.io/projected/28a22ae2-4296-4673-b4eb-2ffd7065ac0a-kube-api-access-ct5x5\") pod \"nova-scheduler-0\" (UID: \"28a22ae2-4296-4673-b4eb-2ffd7065ac0a\") " pod="openstack/nova-scheduler-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.237878 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a22ae2-4296-4673-b4eb-2ffd7065ac0a-config-data\") pod \"nova-scheduler-0\" (UID: \"28a22ae2-4296-4673-b4eb-2ffd7065ac0a\") " pod="openstack/nova-scheduler-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.237937 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a22ae2-4296-4673-b4eb-2ffd7065ac0a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"28a22ae2-4296-4673-b4eb-2ffd7065ac0a\") " pod="openstack/nova-scheduler-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.242558 4832 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.253486 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-hng8b"] Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.341711 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-hng8b\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.342029 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-config\") pod \"dnsmasq-dns-bccf8f775-hng8b\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.342054 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-hng8b\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.342147 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqnv5\" (UniqueName: \"kubernetes.io/projected/a05724d1-f620-4a80-b256-a2d73ab25092-kube-api-access-mqnv5\") pod \"dnsmasq-dns-bccf8f775-hng8b\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.342192 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-dns-svc\") pod \"dnsmasq-dns-bccf8f775-hng8b\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.342242 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct5x5\" (UniqueName: \"kubernetes.io/projected/28a22ae2-4296-4673-b4eb-2ffd7065ac0a-kube-api-access-ct5x5\") pod \"nova-scheduler-0\" (UID: \"28a22ae2-4296-4673-b4eb-2ffd7065ac0a\") " pod="openstack/nova-scheduler-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.342270 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a22ae2-4296-4673-b4eb-2ffd7065ac0a-config-data\") pod \"nova-scheduler-0\" (UID: \"28a22ae2-4296-4673-b4eb-2ffd7065ac0a\") " pod="openstack/nova-scheduler-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.342593 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a22ae2-4296-4673-b4eb-2ffd7065ac0a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"28a22ae2-4296-4673-b4eb-2ffd7065ac0a\") " pod="openstack/nova-scheduler-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.342644 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-hng8b\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.347955 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/28a22ae2-4296-4673-b4eb-2ffd7065ac0a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"28a22ae2-4296-4673-b4eb-2ffd7065ac0a\") " pod="openstack/nova-scheduler-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.348852 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a22ae2-4296-4673-b4eb-2ffd7065ac0a-config-data\") pod \"nova-scheduler-0\" (UID: \"28a22ae2-4296-4673-b4eb-2ffd7065ac0a\") " pod="openstack/nova-scheduler-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.362222 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct5x5\" (UniqueName: \"kubernetes.io/projected/28a22ae2-4296-4673-b4eb-2ffd7065ac0a-kube-api-access-ct5x5\") pod \"nova-scheduler-0\" (UID: \"28a22ae2-4296-4673-b4eb-2ffd7065ac0a\") " pod="openstack/nova-scheduler-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.431766 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.441785 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.446214 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqnv5\" (UniqueName: \"kubernetes.io/projected/a05724d1-f620-4a80-b256-a2d73ab25092-kube-api-access-mqnv5\") pod \"dnsmasq-dns-bccf8f775-hng8b\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.446265 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-dns-svc\") pod \"dnsmasq-dns-bccf8f775-hng8b\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.446337 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-hng8b\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.446378 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-hng8b\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.446406 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-config\") pod \"dnsmasq-dns-bccf8f775-hng8b\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 
12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.447481 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-hng8b\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.446423 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-hng8b\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.447560 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-hng8b\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.447615 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-config\") pod \"dnsmasq-dns-bccf8f775-hng8b\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.447643 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-dns-svc\") pod \"dnsmasq-dns-bccf8f775-hng8b\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.447607 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-hng8b\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.462952 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqnv5\" (UniqueName: \"kubernetes.io/projected/a05724d1-f620-4a80-b256-a2d73ab25092-kube-api-access-mqnv5\") pod \"dnsmasq-dns-bccf8f775-hng8b\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.553700 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.581880 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.746427 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-v4wwj"] Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.864037 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.895274 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9msl5"] Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.896785 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9msl5" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.901019 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.901224 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.919415 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9msl5"] Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.957538 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9msl5\" (UID: \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\") " pod="openstack/nova-cell1-conductor-db-sync-9msl5" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.957955 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qthjg\" (UniqueName: \"kubernetes.io/projected/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-kube-api-access-qthjg\") pod \"nova-cell1-conductor-db-sync-9msl5\" (UID: \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\") " pod="openstack/nova-cell1-conductor-db-sync-9msl5" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.958038 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-scripts\") pod \"nova-cell1-conductor-db-sync-9msl5\" (UID: \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\") " pod="openstack/nova-cell1-conductor-db-sync-9msl5" Mar 12 15:09:06 crc kubenswrapper[4832]: I0312 15:09:06.958126 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-config-data\") pod \"nova-cell1-conductor-db-sync-9msl5\" (UID: \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\") " pod="openstack/nova-cell1-conductor-db-sync-9msl5" Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:06.999733 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 15:09:07 crc kubenswrapper[4832]: W0312 15:09:07.005559 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32824c63_ac4d_44cc_a22e_a8be0c6ae3d2.slice/crio-963500f6491cb0ddfb4a6ce2f730204a3a7a1d21fef426ad5e3c3ca3035977e1 WatchSource:0}: Error finding container 963500f6491cb0ddfb4a6ce2f730204a3a7a1d21fef426ad5e3c3ca3035977e1: Status 404 returned error can't find the container with id 963500f6491cb0ddfb4a6ce2f730204a3a7a1d21fef426ad5e3c3ca3035977e1 Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.054956 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.067745 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-config-data\") pod \"nova-cell1-conductor-db-sync-9msl5\" (UID: \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\") " pod="openstack/nova-cell1-conductor-db-sync-9msl5" Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.068655 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9msl5\" (UID: \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\") " pod="openstack/nova-cell1-conductor-db-sync-9msl5" Mar 12 15:09:07 crc 
kubenswrapper[4832]: I0312 15:09:07.068708 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qthjg\" (UniqueName: \"kubernetes.io/projected/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-kube-api-access-qthjg\") pod \"nova-cell1-conductor-db-sync-9msl5\" (UID: \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\") " pod="openstack/nova-cell1-conductor-db-sync-9msl5" Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.068823 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-scripts\") pod \"nova-cell1-conductor-db-sync-9msl5\" (UID: \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\") " pod="openstack/nova-cell1-conductor-db-sync-9msl5" Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.074993 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9msl5\" (UID: \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\") " pod="openstack/nova-cell1-conductor-db-sync-9msl5" Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.077889 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-config-data\") pod \"nova-cell1-conductor-db-sync-9msl5\" (UID: \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\") " pod="openstack/nova-cell1-conductor-db-sync-9msl5" Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.083787 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-scripts\") pod \"nova-cell1-conductor-db-sync-9msl5\" (UID: \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\") " pod="openstack/nova-cell1-conductor-db-sync-9msl5" Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.086247 
4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qthjg\" (UniqueName: \"kubernetes.io/projected/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-kube-api-access-qthjg\") pod \"nova-cell1-conductor-db-sync-9msl5\" (UID: \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\") " pod="openstack/nova-cell1-conductor-db-sync-9msl5" Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.177373 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-hng8b"] Mar 12 15:09:07 crc kubenswrapper[4832]: W0312 15:09:07.181912 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda05724d1_f620_4a80_b256_a2d73ab25092.slice/crio-f8059f459617667015a9d7dbe756476ecdba96cb830ab7205b02584259f825a4 WatchSource:0}: Error finding container f8059f459617667015a9d7dbe756476ecdba96cb830ab7205b02584259f825a4: Status 404 returned error can't find the container with id f8059f459617667015a9d7dbe756476ecdba96cb830ab7205b02584259f825a4 Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.185755 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.268160 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9msl5" Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.615113 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"32824c63-ac4d-44cc-a22e-a8be0c6ae3d2","Type":"ContainerStarted","Data":"963500f6491cb0ddfb4a6ce2f730204a3a7a1d21fef426ad5e3c3ca3035977e1"} Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.618573 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6108009e-5c71-4abd-8f54-1eaad2158264","Type":"ContainerStarted","Data":"b072ab51a3322ebdae3242fca5e78510e52c872bfacae4636553d34965749df3"} Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.620929 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ded7d78-c253-41e0-a2a3-5ce289c98a72","Type":"ContainerStarted","Data":"f80001ad94d7fa0e862a82dc7f5850d35c10894ac57e339c82d4dee1c31d4640"} Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.622539 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v4wwj" event={"ID":"b521e10b-cf39-49fe-9078-bdc3e8f87a5d","Type":"ContainerStarted","Data":"8226049b28f79ac6c9138c7fa3d3159af15506b7d3452f3276893082570b29b5"} Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.622565 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v4wwj" event={"ID":"b521e10b-cf39-49fe-9078-bdc3e8f87a5d","Type":"ContainerStarted","Data":"5b5a66ab9ae310508473f565040a172bb9ce6e4ac5ceb6e06f7d4c4f9a08be60"} Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.624681 4832 generic.go:334] "Generic (PLEG): container finished" podID="a05724d1-f620-4a80-b256-a2d73ab25092" containerID="45cfba313928b4976b854b46d34a1acb710937c63ed46a602bbfed7ef629df6e" exitCode=0 Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.625232 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-bccf8f775-hng8b" event={"ID":"a05724d1-f620-4a80-b256-a2d73ab25092","Type":"ContainerDied","Data":"45cfba313928b4976b854b46d34a1acb710937c63ed46a602bbfed7ef629df6e"} Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.625256 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-hng8b" event={"ID":"a05724d1-f620-4a80-b256-a2d73ab25092","Type":"ContainerStarted","Data":"f8059f459617667015a9d7dbe756476ecdba96cb830ab7205b02584259f825a4"} Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.629780 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"28a22ae2-4296-4673-b4eb-2ffd7065ac0a","Type":"ContainerStarted","Data":"88bd6d426d9e8d0ba206fb90361963ff1fa90c06dbdca3a41ae6a783dd8d6313"} Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.646273 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-v4wwj" podStartSLOduration=2.6462544120000002 podStartE2EDuration="2.646254412s" podCreationTimestamp="2026-03-12 15:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:09:07.642879265 +0000 UTC m=+1306.286893501" watchObservedRunningTime="2026-03-12 15:09:07.646254412 +0000 UTC m=+1306.290268638" Mar 12 15:09:07 crc kubenswrapper[4832]: I0312 15:09:07.801988 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9msl5"] Mar 12 15:09:08 crc kubenswrapper[4832]: I0312 15:09:08.639524 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9msl5" event={"ID":"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4","Type":"ContainerStarted","Data":"169c415db7f0ad30b7fdc288c6dcf7e820d7deb0aee5b2dcfc21821fa0bbc1c4"} Mar 12 15:09:08 crc kubenswrapper[4832]: I0312 15:09:08.639831 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-9msl5" event={"ID":"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4","Type":"ContainerStarted","Data":"d9bfde67c309b07714599bc209ca28b1632672a46914f988a0f492f57ec02d8c"} Mar 12 15:09:08 crc kubenswrapper[4832]: I0312 15:09:08.642477 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-hng8b" event={"ID":"a05724d1-f620-4a80-b256-a2d73ab25092","Type":"ContainerStarted","Data":"e153ec789ee468252210400e299d88926950358d2e30db878679eb33c4ab5bfd"} Mar 12 15:09:08 crc kubenswrapper[4832]: I0312 15:09:08.642553 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:08 crc kubenswrapper[4832]: I0312 15:09:08.684870 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-9msl5" podStartSLOduration=2.684848279 podStartE2EDuration="2.684848279s" podCreationTimestamp="2026-03-12 15:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:09:08.667212512 +0000 UTC m=+1307.311226778" watchObservedRunningTime="2026-03-12 15:09:08.684848279 +0000 UTC m=+1307.328862515" Mar 12 15:09:08 crc kubenswrapper[4832]: I0312 15:09:08.692854 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-hng8b" podStartSLOduration=2.692834358 podStartE2EDuration="2.692834358s" podCreationTimestamp="2026-03-12 15:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:09:08.687741532 +0000 UTC m=+1307.331755768" watchObservedRunningTime="2026-03-12 15:09:08.692834358 +0000 UTC m=+1307.336848584" Mar 12 15:09:09 crc kubenswrapper[4832]: I0312 15:09:09.971317 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 
15:09:10 crc kubenswrapper[4832]: I0312 15:09:10.000815 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 15:09:11 crc kubenswrapper[4832]: I0312 15:09:11.412609 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6108009e-5c71-4abd-8f54-1eaad2158264","Type":"ContainerStarted","Data":"81a62355b0c8fcc39c572b4588dfc2e81e705b8728d0fd9a4a509eb1fbee5beb"} Mar 12 15:09:11 crc kubenswrapper[4832]: I0312 15:09:11.413251 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6108009e-5c71-4abd-8f54-1eaad2158264" containerName="nova-metadata-log" containerID="cri-o://81a62355b0c8fcc39c572b4588dfc2e81e705b8728d0fd9a4a509eb1fbee5beb" gracePeriod=30 Mar 12 15:09:11 crc kubenswrapper[4832]: I0312 15:09:11.413767 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6108009e-5c71-4abd-8f54-1eaad2158264" containerName="nova-metadata-metadata" containerID="cri-o://f599e51cd1b0eee74251330c819e8c9a673b9d3b9ae501bedf6749891d02ae3a" gracePeriod=30 Mar 12 15:09:11 crc kubenswrapper[4832]: I0312 15:09:11.424142 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ded7d78-c253-41e0-a2a3-5ce289c98a72","Type":"ContainerStarted","Data":"cbdfcce6e7c4ee5f88ff3159ee7c9808c05e702cf79858ba41ac125fec212e64"} Mar 12 15:09:11 crc kubenswrapper[4832]: I0312 15:09:11.434719 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 15:09:11 crc kubenswrapper[4832]: I0312 15:09:11.434770 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 15:09:11 crc kubenswrapper[4832]: I0312 15:09:11.440195 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"28a22ae2-4296-4673-b4eb-2ffd7065ac0a","Type":"ContainerStarted","Data":"629ef02557ec5e052c77017b16b74f188878aae766f79a6ef451f15c8ad2034e"} Mar 12 15:09:11 crc kubenswrapper[4832]: I0312 15:09:11.444063 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.552043002 podStartE2EDuration="6.444042715s" podCreationTimestamp="2026-03-12 15:09:05 +0000 UTC" firstStartedPulling="2026-03-12 15:09:07.065571593 +0000 UTC m=+1305.709585819" lastFinishedPulling="2026-03-12 15:09:09.957571296 +0000 UTC m=+1308.601585532" observedRunningTime="2026-03-12 15:09:11.439587907 +0000 UTC m=+1310.083602133" watchObservedRunningTime="2026-03-12 15:09:11.444042715 +0000 UTC m=+1310.088056931" Mar 12 15:09:11 crc kubenswrapper[4832]: I0312 15:09:11.453032 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"32824c63-ac4d-44cc-a22e-a8be0c6ae3d2","Type":"ContainerStarted","Data":"15547e582e9806db9422224d4f63bb6653271646c8776a2017fdd56cc6c4d73c"} Mar 12 15:09:11 crc kubenswrapper[4832]: I0312 15:09:11.453184 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="32824c63-ac4d-44cc-a22e-a8be0c6ae3d2" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://15547e582e9806db9422224d4f63bb6653271646c8776a2017fdd56cc6c4d73c" gracePeriod=30 Mar 12 15:09:11 crc kubenswrapper[4832]: I0312 15:09:11.462670 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.697794241 podStartE2EDuration="5.462627829s" podCreationTimestamp="2026-03-12 15:09:06 +0000 UTC" firstStartedPulling="2026-03-12 15:09:07.195632771 +0000 UTC m=+1305.839646997" lastFinishedPulling="2026-03-12 15:09:09.960466359 +0000 UTC m=+1308.604480585" observedRunningTime="2026-03-12 15:09:11.458688076 +0000 UTC m=+1310.102702302" 
watchObservedRunningTime="2026-03-12 15:09:11.462627829 +0000 UTC m=+1310.106642055" Mar 12 15:09:11 crc kubenswrapper[4832]: I0312 15:09:11.483561 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.5712991560000003 podStartE2EDuration="6.4835398s" podCreationTimestamp="2026-03-12 15:09:05 +0000 UTC" firstStartedPulling="2026-03-12 15:09:07.045351642 +0000 UTC m=+1305.689365868" lastFinishedPulling="2026-03-12 15:09:09.957592276 +0000 UTC m=+1308.601606512" observedRunningTime="2026-03-12 15:09:11.472534284 +0000 UTC m=+1310.116548510" watchObservedRunningTime="2026-03-12 15:09:11.4835398 +0000 UTC m=+1310.127554016" Mar 12 15:09:11 crc kubenswrapper[4832]: I0312 15:09:11.553786 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 12 15:09:11 crc kubenswrapper[4832]: E0312 15:09:11.649213 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6108009e_5c71_4abd_8f54_1eaad2158264.slice/crio-conmon-81a62355b0c8fcc39c572b4588dfc2e81e705b8728d0fd9a4a509eb1fbee5beb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6108009e_5c71_4abd_8f54_1eaad2158264.slice/crio-81a62355b0c8fcc39c572b4588dfc2e81e705b8728d0fd9a4a509eb1fbee5beb.scope\": RecentStats: unable to find data in memory cache]" Mar 12 15:09:12 crc kubenswrapper[4832]: I0312 15:09:12.466910 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ded7d78-c253-41e0-a2a3-5ce289c98a72","Type":"ContainerStarted","Data":"63e4d333d892723079b363f005c5cff5df106cee780c54a0b6af0bb9260adf70"} Mar 12 15:09:12 crc kubenswrapper[4832]: I0312 15:09:12.468572 4832 generic.go:334] "Generic (PLEG): container finished" podID="6108009e-5c71-4abd-8f54-1eaad2158264" 
containerID="81a62355b0c8fcc39c572b4588dfc2e81e705b8728d0fd9a4a509eb1fbee5beb" exitCode=143 Mar 12 15:09:12 crc kubenswrapper[4832]: I0312 15:09:12.468642 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6108009e-5c71-4abd-8f54-1eaad2158264","Type":"ContainerStarted","Data":"f599e51cd1b0eee74251330c819e8c9a673b9d3b9ae501bedf6749891d02ae3a"} Mar 12 15:09:12 crc kubenswrapper[4832]: I0312 15:09:12.468672 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6108009e-5c71-4abd-8f54-1eaad2158264","Type":"ContainerDied","Data":"81a62355b0c8fcc39c572b4588dfc2e81e705b8728d0fd9a4a509eb1fbee5beb"} Mar 12 15:09:12 crc kubenswrapper[4832]: I0312 15:09:12.495269 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.407635931 podStartE2EDuration="7.495249315s" podCreationTimestamp="2026-03-12 15:09:05 +0000 UTC" firstStartedPulling="2026-03-12 15:09:06.869468408 +0000 UTC m=+1305.513482634" lastFinishedPulling="2026-03-12 15:09:09.957081792 +0000 UTC m=+1308.601096018" observedRunningTime="2026-03-12 15:09:12.485912517 +0000 UTC m=+1311.129926743" watchObservedRunningTime="2026-03-12 15:09:12.495249315 +0000 UTC m=+1311.139263541" Mar 12 15:09:14 crc kubenswrapper[4832]: I0312 15:09:14.498532 4832 generic.go:334] "Generic (PLEG): container finished" podID="b521e10b-cf39-49fe-9078-bdc3e8f87a5d" containerID="8226049b28f79ac6c9138c7fa3d3159af15506b7d3452f3276893082570b29b5" exitCode=0 Mar 12 15:09:14 crc kubenswrapper[4832]: I0312 15:09:14.498595 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v4wwj" event={"ID":"b521e10b-cf39-49fe-9078-bdc3e8f87a5d","Type":"ContainerDied","Data":"8226049b28f79ac6c9138c7fa3d3159af15506b7d3452f3276893082570b29b5"} Mar 12 15:09:15 crc kubenswrapper[4832]: I0312 15:09:15.510157 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="07b08aa8-c1bf-431c-b101-3c8ece4cd2d4" containerID="169c415db7f0ad30b7fdc288c6dcf7e820d7deb0aee5b2dcfc21821fa0bbc1c4" exitCode=0 Mar 12 15:09:15 crc kubenswrapper[4832]: I0312 15:09:15.510250 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9msl5" event={"ID":"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4","Type":"ContainerDied","Data":"169c415db7f0ad30b7fdc288c6dcf7e820d7deb0aee5b2dcfc21821fa0bbc1c4"} Mar 12 15:09:15 crc kubenswrapper[4832]: I0312 15:09:15.932056 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v4wwj" Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.060883 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-combined-ca-bundle\") pod \"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\" (UID: \"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\") " Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.061023 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-config-data\") pod \"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\" (UID: \"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\") " Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.061083 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-scripts\") pod \"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\" (UID: \"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\") " Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.061201 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqv9z\" (UniqueName: \"kubernetes.io/projected/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-kube-api-access-hqv9z\") pod 
\"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\" (UID: \"b521e10b-cf39-49fe-9078-bdc3e8f87a5d\") " Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.069491 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-scripts" (OuterVolumeSpecName: "scripts") pod "b521e10b-cf39-49fe-9078-bdc3e8f87a5d" (UID: "b521e10b-cf39-49fe-9078-bdc3e8f87a5d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.074405 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-kube-api-access-hqv9z" (OuterVolumeSpecName: "kube-api-access-hqv9z") pod "b521e10b-cf39-49fe-9078-bdc3e8f87a5d" (UID: "b521e10b-cf39-49fe-9078-bdc3e8f87a5d"). InnerVolumeSpecName "kube-api-access-hqv9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.104719 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-config-data" (OuterVolumeSpecName: "config-data") pod "b521e10b-cf39-49fe-9078-bdc3e8f87a5d" (UID: "b521e10b-cf39-49fe-9078-bdc3e8f87a5d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.116533 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b521e10b-cf39-49fe-9078-bdc3e8f87a5d" (UID: "b521e10b-cf39-49fe-9078-bdc3e8f87a5d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.135353 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.135412 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.164280 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.164348 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqv9z\" (UniqueName: \"kubernetes.io/projected/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-kube-api-access-hqv9z\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.164370 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.164382 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b521e10b-cf39-49fe-9078-bdc3e8f87a5d-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.443185 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.520888 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v4wwj" Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.520887 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v4wwj" event={"ID":"b521e10b-cf39-49fe-9078-bdc3e8f87a5d","Type":"ContainerDied","Data":"5b5a66ab9ae310508473f565040a172bb9ce6e4ac5ceb6e06f7d4c4f9a08be60"} Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.520962 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b5a66ab9ae310508473f565040a172bb9ce6e4ac5ceb6e06f7d4c4f9a08be60" Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.554185 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.583713 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.598225 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.682398 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-zj8ll"] Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.682612 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" podUID="1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36" containerName="dnsmasq-dns" containerID="cri-o://68d558f2d8c0938028e178779ec5b4f22badff2be296a76c5c97769d6d66ecf5" gracePeriod=10 Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.805375 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.805744 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="6ded7d78-c253-41e0-a2a3-5ce289c98a72" containerName="nova-api-log" containerID="cri-o://cbdfcce6e7c4ee5f88ff3159ee7c9808c05e702cf79858ba41ac125fec212e64" gracePeriod=30 Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.805905 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6ded7d78-c253-41e0-a2a3-5ce289c98a72" containerName="nova-api-api" containerID="cri-o://63e4d333d892723079b363f005c5cff5df106cee780c54a0b6af0bb9260adf70" gracePeriod=30 Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.823977 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.824309 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6ded7d78-c253-41e0-a2a3-5ce289c98a72" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": EOF" Mar 12 15:09:16 crc kubenswrapper[4832]: I0312 15:09:16.824342 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6ded7d78-c253-41e0-a2a3-5ce289c98a72" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": EOF" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.022791 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9msl5" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.190930 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-config-data\") pod \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\" (UID: \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\") " Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.190993 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qthjg\" (UniqueName: \"kubernetes.io/projected/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-kube-api-access-qthjg\") pod \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\" (UID: \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\") " Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.191065 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-combined-ca-bundle\") pod \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\" (UID: \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\") " Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.191146 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-scripts\") pod \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\" (UID: \"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4\") " Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.197277 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-kube-api-access-qthjg" (OuterVolumeSpecName: "kube-api-access-qthjg") pod "07b08aa8-c1bf-431c-b101-3c8ece4cd2d4" (UID: "07b08aa8-c1bf-431c-b101-3c8ece4cd2d4"). InnerVolumeSpecName "kube-api-access-qthjg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.200031 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-scripts" (OuterVolumeSpecName: "scripts") pod "07b08aa8-c1bf-431c-b101-3c8ece4cd2d4" (UID: "07b08aa8-c1bf-431c-b101-3c8ece4cd2d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.219682 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-config-data" (OuterVolumeSpecName: "config-data") pod "07b08aa8-c1bf-431c-b101-3c8ece4cd2d4" (UID: "07b08aa8-c1bf-431c-b101-3c8ece4cd2d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.223494 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07b08aa8-c1bf-431c-b101-3c8ece4cd2d4" (UID: "07b08aa8-c1bf-431c-b101-3c8ece4cd2d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.286842 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.293330 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.293373 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qthjg\" (UniqueName: \"kubernetes.io/projected/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-kube-api-access-qthjg\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.293387 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.293400 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.394846 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-dns-svc\") pod \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.394989 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-ovsdbserver-nb\") pod \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.395068 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-dns-swift-storage-0\") pod \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.395107 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlhnb\" (UniqueName: \"kubernetes.io/projected/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-kube-api-access-qlhnb\") pod \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.395190 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-ovsdbserver-sb\") pod \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.395227 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-config\") pod \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\" (UID: \"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36\") " Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.399172 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-kube-api-access-qlhnb" (OuterVolumeSpecName: "kube-api-access-qlhnb") pod "1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36" (UID: "1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36"). InnerVolumeSpecName "kube-api-access-qlhnb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.444206 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36" (UID: "1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.454492 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36" (UID: "1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.456131 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-config" (OuterVolumeSpecName: "config") pod "1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36" (UID: "1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.466097 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36" (UID: "1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.477161 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36" (UID: "1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.499638 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.499673 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.499682 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.499708 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.499718 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.499728 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlhnb\" (UniqueName: 
\"kubernetes.io/projected/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36-kube-api-access-qlhnb\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.558229 4832 generic.go:334] "Generic (PLEG): container finished" podID="1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36" containerID="68d558f2d8c0938028e178779ec5b4f22badff2be296a76c5c97769d6d66ecf5" exitCode=0 Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.558303 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" event={"ID":"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36","Type":"ContainerDied","Data":"68d558f2d8c0938028e178779ec5b4f22badff2be296a76c5c97769d6d66ecf5"} Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.558330 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" event={"ID":"1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36","Type":"ContainerDied","Data":"c76fcc295c7a4e73d2bf5cce2b691a52a540f5777cb461c25d03a577ac53e27d"} Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.558346 4832 scope.go:117] "RemoveContainer" containerID="68d558f2d8c0938028e178779ec5b4f22badff2be296a76c5c97769d6d66ecf5" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.558475 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-zj8ll" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.562436 4832 generic.go:334] "Generic (PLEG): container finished" podID="6ded7d78-c253-41e0-a2a3-5ce289c98a72" containerID="cbdfcce6e7c4ee5f88ff3159ee7c9808c05e702cf79858ba41ac125fec212e64" exitCode=143 Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.562530 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ded7d78-c253-41e0-a2a3-5ce289c98a72","Type":"ContainerDied","Data":"cbdfcce6e7c4ee5f88ff3159ee7c9808c05e702cf79858ba41ac125fec212e64"} Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.565222 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9msl5" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.565423 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9msl5" event={"ID":"07b08aa8-c1bf-431c-b101-3c8ece4cd2d4","Type":"ContainerDied","Data":"d9bfde67c309b07714599bc209ca28b1632672a46914f988a0f492f57ec02d8c"} Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.565728 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9bfde67c309b07714599bc209ca28b1632672a46914f988a0f492f57ec02d8c" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.618644 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-zj8ll"] Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.618977 4832 scope.go:117] "RemoveContainer" containerID="6b229be2ad87a262af29e701dd31acb93e56bc39c16c5c945b273c747058508f" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.619891 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.627281 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-6578955fd5-zj8ll"] Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.635394 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 15:09:17 crc kubenswrapper[4832]: E0312 15:09:17.635774 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b08aa8-c1bf-431c-b101-3c8ece4cd2d4" containerName="nova-cell1-conductor-db-sync" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.635791 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b08aa8-c1bf-431c-b101-3c8ece4cd2d4" containerName="nova-cell1-conductor-db-sync" Mar 12 15:09:17 crc kubenswrapper[4832]: E0312 15:09:17.635800 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36" containerName="init" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.635806 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36" containerName="init" Mar 12 15:09:17 crc kubenswrapper[4832]: E0312 15:09:17.635823 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36" containerName="dnsmasq-dns" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.635830 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36" containerName="dnsmasq-dns" Mar 12 15:09:17 crc kubenswrapper[4832]: E0312 15:09:17.635841 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b521e10b-cf39-49fe-9078-bdc3e8f87a5d" containerName="nova-manage" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.635849 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b521e10b-cf39-49fe-9078-bdc3e8f87a5d" containerName="nova-manage" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.636040 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b08aa8-c1bf-431c-b101-3c8ece4cd2d4" containerName="nova-cell1-conductor-db-sync" Mar 
12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.636057 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36" containerName="dnsmasq-dns" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.636069 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b521e10b-cf39-49fe-9078-bdc3e8f87a5d" containerName="nova-manage" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.636688 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.640791 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.653710 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.661876 4832 scope.go:117] "RemoveContainer" containerID="68d558f2d8c0938028e178779ec5b4f22badff2be296a76c5c97769d6d66ecf5" Mar 12 15:09:17 crc kubenswrapper[4832]: E0312 15:09:17.663991 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68d558f2d8c0938028e178779ec5b4f22badff2be296a76c5c97769d6d66ecf5\": container with ID starting with 68d558f2d8c0938028e178779ec5b4f22badff2be296a76c5c97769d6d66ecf5 not found: ID does not exist" containerID="68d558f2d8c0938028e178779ec5b4f22badff2be296a76c5c97769d6d66ecf5" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.664031 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68d558f2d8c0938028e178779ec5b4f22badff2be296a76c5c97769d6d66ecf5"} err="failed to get container status \"68d558f2d8c0938028e178779ec5b4f22badff2be296a76c5c97769d6d66ecf5\": rpc error: code = NotFound desc = could not find container 
\"68d558f2d8c0938028e178779ec5b4f22badff2be296a76c5c97769d6d66ecf5\": container with ID starting with 68d558f2d8c0938028e178779ec5b4f22badff2be296a76c5c97769d6d66ecf5 not found: ID does not exist" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.664057 4832 scope.go:117] "RemoveContainer" containerID="6b229be2ad87a262af29e701dd31acb93e56bc39c16c5c945b273c747058508f" Mar 12 15:09:17 crc kubenswrapper[4832]: E0312 15:09:17.668007 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b229be2ad87a262af29e701dd31acb93e56bc39c16c5c945b273c747058508f\": container with ID starting with 6b229be2ad87a262af29e701dd31acb93e56bc39c16c5c945b273c747058508f not found: ID does not exist" containerID="6b229be2ad87a262af29e701dd31acb93e56bc39c16c5c945b273c747058508f" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.668039 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b229be2ad87a262af29e701dd31acb93e56bc39c16c5c945b273c747058508f"} err="failed to get container status \"6b229be2ad87a262af29e701dd31acb93e56bc39c16c5c945b273c747058508f\": rpc error: code = NotFound desc = could not find container \"6b229be2ad87a262af29e701dd31acb93e56bc39c16c5c945b273c747058508f\": container with ID starting with 6b229be2ad87a262af29e701dd31acb93e56bc39c16c5c945b273c747058508f not found: ID does not exist" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.806175 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpmrf\" (UniqueName: \"kubernetes.io/projected/d3cc4ebc-99b9-474c-aef6-c527ce1ed24e-kube-api-access-mpmrf\") pod \"nova-cell1-conductor-0\" (UID: \"d3cc4ebc-99b9-474c-aef6-c527ce1ed24e\") " pod="openstack/nova-cell1-conductor-0" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.806542 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cc4ebc-99b9-474c-aef6-c527ce1ed24e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3cc4ebc-99b9-474c-aef6-c527ce1ed24e\") " pod="openstack/nova-cell1-conductor-0" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.806694 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3cc4ebc-99b9-474c-aef6-c527ce1ed24e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d3cc4ebc-99b9-474c-aef6-c527ce1ed24e\") " pod="openstack/nova-cell1-conductor-0" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.908909 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpmrf\" (UniqueName: \"kubernetes.io/projected/d3cc4ebc-99b9-474c-aef6-c527ce1ed24e-kube-api-access-mpmrf\") pod \"nova-cell1-conductor-0\" (UID: \"d3cc4ebc-99b9-474c-aef6-c527ce1ed24e\") " pod="openstack/nova-cell1-conductor-0" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.909012 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cc4ebc-99b9-474c-aef6-c527ce1ed24e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3cc4ebc-99b9-474c-aef6-c527ce1ed24e\") " pod="openstack/nova-cell1-conductor-0" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.909043 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3cc4ebc-99b9-474c-aef6-c527ce1ed24e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d3cc4ebc-99b9-474c-aef6-c527ce1ed24e\") " pod="openstack/nova-cell1-conductor-0" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.913031 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d3cc4ebc-99b9-474c-aef6-c527ce1ed24e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d3cc4ebc-99b9-474c-aef6-c527ce1ed24e\") " pod="openstack/nova-cell1-conductor-0" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.913674 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cc4ebc-99b9-474c-aef6-c527ce1ed24e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3cc4ebc-99b9-474c-aef6-c527ce1ed24e\") " pod="openstack/nova-cell1-conductor-0" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.927196 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpmrf\" (UniqueName: \"kubernetes.io/projected/d3cc4ebc-99b9-474c-aef6-c527ce1ed24e-kube-api-access-mpmrf\") pod \"nova-cell1-conductor-0\" (UID: \"d3cc4ebc-99b9-474c-aef6-c527ce1ed24e\") " pod="openstack/nova-cell1-conductor-0" Mar 12 15:09:17 crc kubenswrapper[4832]: I0312 15:09:17.961795 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 12 15:09:18 crc kubenswrapper[4832]: I0312 15:09:18.412818 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 15:09:18 crc kubenswrapper[4832]: I0312 15:09:18.580104 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d3cc4ebc-99b9-474c-aef6-c527ce1ed24e","Type":"ContainerStarted","Data":"0279ac53307888fb19e2fbebc54ee38b7146c182eb47a438ae789e2bdde86552"} Mar 12 15:09:18 crc kubenswrapper[4832]: I0312 15:09:18.581428 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="28a22ae2-4296-4673-b4eb-2ffd7065ac0a" containerName="nova-scheduler-scheduler" containerID="cri-o://629ef02557ec5e052c77017b16b74f188878aae766f79a6ef451f15c8ad2034e" gracePeriod=30 Mar 12 15:09:18 crc kubenswrapper[4832]: I0312 15:09:18.630597 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36" path="/var/lib/kubelet/pods/1cf10ffb-9ad3-4d4a-8e0f-18510c7b1c36/volumes" Mar 12 15:09:19 crc kubenswrapper[4832]: I0312 15:09:19.591435 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d3cc4ebc-99b9-474c-aef6-c527ce1ed24e","Type":"ContainerStarted","Data":"1a41ced9cbf62d3364ce73fe191182d3f4ac428e8279dc5767b242d6d3bba019"} Mar 12 15:09:19 crc kubenswrapper[4832]: I0312 15:09:19.592437 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 12 15:09:21 crc kubenswrapper[4832]: E0312 15:09:21.557928 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="629ef02557ec5e052c77017b16b74f188878aae766f79a6ef451f15c8ad2034e" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 15:09:21 crc kubenswrapper[4832]: E0312 15:09:21.560302 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="629ef02557ec5e052c77017b16b74f188878aae766f79a6ef451f15c8ad2034e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 15:09:21 crc kubenswrapper[4832]: E0312 15:09:21.561874 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="629ef02557ec5e052c77017b16b74f188878aae766f79a6ef451f15c8ad2034e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 15:09:21 crc kubenswrapper[4832]: E0312 15:09:21.561975 4832 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="28a22ae2-4296-4673-b4eb-2ffd7065ac0a" containerName="nova-scheduler-scheduler" Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.648273 4832 generic.go:334] "Generic (PLEG): container finished" podID="6ded7d78-c253-41e0-a2a3-5ce289c98a72" containerID="63e4d333d892723079b363f005c5cff5df106cee780c54a0b6af0bb9260adf70" exitCode=0 Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.648611 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ded7d78-c253-41e0-a2a3-5ce289c98a72","Type":"ContainerDied","Data":"63e4d333d892723079b363f005c5cff5df106cee780c54a0b6af0bb9260adf70"} Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.649069 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"6ded7d78-c253-41e0-a2a3-5ce289c98a72","Type":"ContainerDied","Data":"f80001ad94d7fa0e862a82dc7f5850d35c10894ac57e339c82d4dee1c31d4640"} Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.649112 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f80001ad94d7fa0e862a82dc7f5850d35c10894ac57e339c82d4dee1c31d4640" Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.650799 4832 generic.go:334] "Generic (PLEG): container finished" podID="28a22ae2-4296-4673-b4eb-2ffd7065ac0a" containerID="629ef02557ec5e052c77017b16b74f188878aae766f79a6ef451f15c8ad2034e" exitCode=0 Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.650826 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"28a22ae2-4296-4673-b4eb-2ffd7065ac0a","Type":"ContainerDied","Data":"629ef02557ec5e052c77017b16b74f188878aae766f79a6ef451f15c8ad2034e"} Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.754315 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.778166 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=5.778148412 podStartE2EDuration="5.778148412s" podCreationTimestamp="2026-03-12 15:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:09:19.616756068 +0000 UTC m=+1318.260770314" watchObservedRunningTime="2026-03-12 15:09:22.778148412 +0000 UTC m=+1321.422162648" Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.815797 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ded7d78-c253-41e0-a2a3-5ce289c98a72-combined-ca-bundle\") pod \"6ded7d78-c253-41e0-a2a3-5ce289c98a72\" (UID: \"6ded7d78-c253-41e0-a2a3-5ce289c98a72\") " Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.815850 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ded7d78-c253-41e0-a2a3-5ce289c98a72-logs\") pod \"6ded7d78-c253-41e0-a2a3-5ce289c98a72\" (UID: \"6ded7d78-c253-41e0-a2a3-5ce289c98a72\") " Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.816421 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs4q6\" (UniqueName: \"kubernetes.io/projected/6ded7d78-c253-41e0-a2a3-5ce289c98a72-kube-api-access-bs4q6\") pod \"6ded7d78-c253-41e0-a2a3-5ce289c98a72\" (UID: \"6ded7d78-c253-41e0-a2a3-5ce289c98a72\") " Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.816603 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ded7d78-c253-41e0-a2a3-5ce289c98a72-config-data\") pod \"6ded7d78-c253-41e0-a2a3-5ce289c98a72\" (UID: 
\"6ded7d78-c253-41e0-a2a3-5ce289c98a72\") " Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.817244 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ded7d78-c253-41e0-a2a3-5ce289c98a72-logs" (OuterVolumeSpecName: "logs") pod "6ded7d78-c253-41e0-a2a3-5ce289c98a72" (UID: "6ded7d78-c253-41e0-a2a3-5ce289c98a72"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.824057 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ded7d78-c253-41e0-a2a3-5ce289c98a72-kube-api-access-bs4q6" (OuterVolumeSpecName: "kube-api-access-bs4q6") pod "6ded7d78-c253-41e0-a2a3-5ce289c98a72" (UID: "6ded7d78-c253-41e0-a2a3-5ce289c98a72"). InnerVolumeSpecName "kube-api-access-bs4q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.854700 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ded7d78-c253-41e0-a2a3-5ce289c98a72-config-data" (OuterVolumeSpecName: "config-data") pod "6ded7d78-c253-41e0-a2a3-5ce289c98a72" (UID: "6ded7d78-c253-41e0-a2a3-5ce289c98a72"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.859035 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ded7d78-c253-41e0-a2a3-5ce289c98a72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ded7d78-c253-41e0-a2a3-5ce289c98a72" (UID: "6ded7d78-c253-41e0-a2a3-5ce289c98a72"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.918343 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs4q6\" (UniqueName: \"kubernetes.io/projected/6ded7d78-c253-41e0-a2a3-5ce289c98a72-kube-api-access-bs4q6\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.918370 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ded7d78-c253-41e0-a2a3-5ce289c98a72-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.918381 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ded7d78-c253-41e0-a2a3-5ce289c98a72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.918391 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ded7d78-c253-41e0-a2a3-5ce289c98a72-logs\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:22 crc kubenswrapper[4832]: I0312 15:09:22.925441 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.020097 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a22ae2-4296-4673-b4eb-2ffd7065ac0a-combined-ca-bundle\") pod \"28a22ae2-4296-4673-b4eb-2ffd7065ac0a\" (UID: \"28a22ae2-4296-4673-b4eb-2ffd7065ac0a\") " Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.020314 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct5x5\" (UniqueName: \"kubernetes.io/projected/28a22ae2-4296-4673-b4eb-2ffd7065ac0a-kube-api-access-ct5x5\") pod \"28a22ae2-4296-4673-b4eb-2ffd7065ac0a\" (UID: \"28a22ae2-4296-4673-b4eb-2ffd7065ac0a\") " Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.020397 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a22ae2-4296-4673-b4eb-2ffd7065ac0a-config-data\") pod \"28a22ae2-4296-4673-b4eb-2ffd7065ac0a\" (UID: \"28a22ae2-4296-4673-b4eb-2ffd7065ac0a\") " Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.024939 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a22ae2-4296-4673-b4eb-2ffd7065ac0a-kube-api-access-ct5x5" (OuterVolumeSpecName: "kube-api-access-ct5x5") pod "28a22ae2-4296-4673-b4eb-2ffd7065ac0a" (UID: "28a22ae2-4296-4673-b4eb-2ffd7065ac0a"). InnerVolumeSpecName "kube-api-access-ct5x5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.044805 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28a22ae2-4296-4673-b4eb-2ffd7065ac0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28a22ae2-4296-4673-b4eb-2ffd7065ac0a" (UID: "28a22ae2-4296-4673-b4eb-2ffd7065ac0a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.053576 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28a22ae2-4296-4673-b4eb-2ffd7065ac0a-config-data" (OuterVolumeSpecName: "config-data") pod "28a22ae2-4296-4673-b4eb-2ffd7065ac0a" (UID: "28a22ae2-4296-4673-b4eb-2ffd7065ac0a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.123603 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a22ae2-4296-4673-b4eb-2ffd7065ac0a-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.123641 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a22ae2-4296-4673-b4eb-2ffd7065ac0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.123655 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct5x5\" (UniqueName: \"kubernetes.io/projected/28a22ae2-4296-4673-b4eb-2ffd7065ac0a-kube-api-access-ct5x5\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.662118 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.662139 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.662121 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"28a22ae2-4296-4673-b4eb-2ffd7065ac0a","Type":"ContainerDied","Data":"88bd6d426d9e8d0ba206fb90361963ff1fa90c06dbdca3a41ae6a783dd8d6313"} Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.662279 4832 scope.go:117] "RemoveContainer" containerID="629ef02557ec5e052c77017b16b74f188878aae766f79a6ef451f15c8ad2034e" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.723769 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.757352 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.780910 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.793178 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.804449 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 15:09:23 crc kubenswrapper[4832]: E0312 15:09:23.805037 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ded7d78-c253-41e0-a2a3-5ce289c98a72" containerName="nova-api-log" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.805071 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ded7d78-c253-41e0-a2a3-5ce289c98a72" containerName="nova-api-log" Mar 12 15:09:23 crc kubenswrapper[4832]: E0312 15:09:23.805125 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ded7d78-c253-41e0-a2a3-5ce289c98a72" containerName="nova-api-api" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.805137 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6ded7d78-c253-41e0-a2a3-5ce289c98a72" containerName="nova-api-api" Mar 12 15:09:23 crc kubenswrapper[4832]: E0312 15:09:23.805162 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a22ae2-4296-4673-b4eb-2ffd7065ac0a" containerName="nova-scheduler-scheduler" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.805173 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a22ae2-4296-4673-b4eb-2ffd7065ac0a" containerName="nova-scheduler-scheduler" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.805416 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ded7d78-c253-41e0-a2a3-5ce289c98a72" containerName="nova-api-api" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.805444 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ded7d78-c253-41e0-a2a3-5ce289c98a72" containerName="nova-api-log" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.805459 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="28a22ae2-4296-4673-b4eb-2ffd7065ac0a" containerName="nova-scheduler-scheduler" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.806675 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.808651 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.818053 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.820979 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.824801 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.831694 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.842886 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.946102 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1af9461-d1f9-4531-92a3-7c8904519749-config-data\") pod \"nova-scheduler-0\" (UID: \"d1af9461-d1f9-4531-92a3-7c8904519749\") " pod="openstack/nova-scheduler-0" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.946173 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/368ae710-da6a-4c6e-8146-1ed639743e78-config-data\") pod \"nova-api-0\" (UID: \"368ae710-da6a-4c6e-8146-1ed639743e78\") " pod="openstack/nova-api-0" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.946356 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368ae710-da6a-4c6e-8146-1ed639743e78-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"368ae710-da6a-4c6e-8146-1ed639743e78\") " pod="openstack/nova-api-0" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.946438 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4bzd\" (UniqueName: \"kubernetes.io/projected/d1af9461-d1f9-4531-92a3-7c8904519749-kube-api-access-w4bzd\") pod \"nova-scheduler-0\" (UID: 
\"d1af9461-d1f9-4531-92a3-7c8904519749\") " pod="openstack/nova-scheduler-0" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.946769 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4v5g\" (UniqueName: \"kubernetes.io/projected/368ae710-da6a-4c6e-8146-1ed639743e78-kube-api-access-c4v5g\") pod \"nova-api-0\" (UID: \"368ae710-da6a-4c6e-8146-1ed639743e78\") " pod="openstack/nova-api-0" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.946873 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1af9461-d1f9-4531-92a3-7c8904519749-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d1af9461-d1f9-4531-92a3-7c8904519749\") " pod="openstack/nova-scheduler-0" Mar 12 15:09:23 crc kubenswrapper[4832]: I0312 15:09:23.946942 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/368ae710-da6a-4c6e-8146-1ed639743e78-logs\") pod \"nova-api-0\" (UID: \"368ae710-da6a-4c6e-8146-1ed639743e78\") " pod="openstack/nova-api-0" Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.049283 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368ae710-da6a-4c6e-8146-1ed639743e78-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"368ae710-da6a-4c6e-8146-1ed639743e78\") " pod="openstack/nova-api-0" Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.049431 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4bzd\" (UniqueName: \"kubernetes.io/projected/d1af9461-d1f9-4531-92a3-7c8904519749-kube-api-access-w4bzd\") pod \"nova-scheduler-0\" (UID: \"d1af9461-d1f9-4531-92a3-7c8904519749\") " pod="openstack/nova-scheduler-0" Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 
15:09:24.049486 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4v5g\" (UniqueName: \"kubernetes.io/projected/368ae710-da6a-4c6e-8146-1ed639743e78-kube-api-access-c4v5g\") pod \"nova-api-0\" (UID: \"368ae710-da6a-4c6e-8146-1ed639743e78\") " pod="openstack/nova-api-0" Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.049554 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1af9461-d1f9-4531-92a3-7c8904519749-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d1af9461-d1f9-4531-92a3-7c8904519749\") " pod="openstack/nova-scheduler-0" Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.049592 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/368ae710-da6a-4c6e-8146-1ed639743e78-logs\") pod \"nova-api-0\" (UID: \"368ae710-da6a-4c6e-8146-1ed639743e78\") " pod="openstack/nova-api-0" Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.049770 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1af9461-d1f9-4531-92a3-7c8904519749-config-data\") pod \"nova-scheduler-0\" (UID: \"d1af9461-d1f9-4531-92a3-7c8904519749\") " pod="openstack/nova-scheduler-0" Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.050412 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/368ae710-da6a-4c6e-8146-1ed639743e78-logs\") pod \"nova-api-0\" (UID: \"368ae710-da6a-4c6e-8146-1ed639743e78\") " pod="openstack/nova-api-0" Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.050648 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/368ae710-da6a-4c6e-8146-1ed639743e78-config-data\") pod \"nova-api-0\" (UID: 
\"368ae710-da6a-4c6e-8146-1ed639743e78\") " pod="openstack/nova-api-0" Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.053464 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1af9461-d1f9-4531-92a3-7c8904519749-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d1af9461-d1f9-4531-92a3-7c8904519749\") " pod="openstack/nova-scheduler-0" Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.054610 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1af9461-d1f9-4531-92a3-7c8904519749-config-data\") pod \"nova-scheduler-0\" (UID: \"d1af9461-d1f9-4531-92a3-7c8904519749\") " pod="openstack/nova-scheduler-0" Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.063191 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/368ae710-da6a-4c6e-8146-1ed639743e78-config-data\") pod \"nova-api-0\" (UID: \"368ae710-da6a-4c6e-8146-1ed639743e78\") " pod="openstack/nova-api-0" Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.069137 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368ae710-da6a-4c6e-8146-1ed639743e78-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"368ae710-da6a-4c6e-8146-1ed639743e78\") " pod="openstack/nova-api-0" Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.072394 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4bzd\" (UniqueName: \"kubernetes.io/projected/d1af9461-d1f9-4531-92a3-7c8904519749-kube-api-access-w4bzd\") pod \"nova-scheduler-0\" (UID: \"d1af9461-d1f9-4531-92a3-7c8904519749\") " pod="openstack/nova-scheduler-0" Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.077210 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4v5g\" 
(UniqueName: \"kubernetes.io/projected/368ae710-da6a-4c6e-8146-1ed639743e78-kube-api-access-c4v5g\") pod \"nova-api-0\" (UID: \"368ae710-da6a-4c6e-8146-1ed639743e78\") " pod="openstack/nova-api-0" Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.126067 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.140705 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.605662 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.637100 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28a22ae2-4296-4673-b4eb-2ffd7065ac0a" path="/var/lib/kubelet/pods/28a22ae2-4296-4673-b4eb-2ffd7065ac0a/volumes" Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.638550 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ded7d78-c253-41e0-a2a3-5ce289c98a72" path="/var/lib/kubelet/pods/6ded7d78-c253-41e0-a2a3-5ce289c98a72/volumes" Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.668879 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 15:09:24 crc kubenswrapper[4832]: I0312 15:09:24.678195 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"368ae710-da6a-4c6e-8146-1ed639743e78","Type":"ContainerStarted","Data":"124d8ff62f9c1757269e1e06240ed898ac3cc5d04abfce69b2f8d62604f41891"} Mar 12 15:09:25 crc kubenswrapper[4832]: I0312 15:09:25.692248 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d1af9461-d1f9-4531-92a3-7c8904519749","Type":"ContainerStarted","Data":"63f9209b2d9755ccef87972c8ba6fa8923112b871e276016119374f0471d73e4"} Mar 12 15:09:25 crc kubenswrapper[4832]: I0312 15:09:25.692559 
4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d1af9461-d1f9-4531-92a3-7c8904519749","Type":"ContainerStarted","Data":"b2798235ca7335d84a59be28a9bfb55c2fe17ac5d7a509ca50cd7abb242a28c2"} Mar 12 15:09:25 crc kubenswrapper[4832]: I0312 15:09:25.694488 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"368ae710-da6a-4c6e-8146-1ed639743e78","Type":"ContainerStarted","Data":"548c3f4641838c31ee2b520670c5372702a2218d32411eb17328f99f1264c69a"} Mar 12 15:09:25 crc kubenswrapper[4832]: I0312 15:09:25.694557 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"368ae710-da6a-4c6e-8146-1ed639743e78","Type":"ContainerStarted","Data":"848ba6424d7559c75a5499a69efc2a06be63b7d3fa1f7360258b391270ab3b3f"} Mar 12 15:09:25 crc kubenswrapper[4832]: I0312 15:09:25.715962 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.715938001 podStartE2EDuration="2.715938001s" podCreationTimestamp="2026-03-12 15:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:09:25.707226771 +0000 UTC m=+1324.351241007" watchObservedRunningTime="2026-03-12 15:09:25.715938001 +0000 UTC m=+1324.359952247" Mar 12 15:09:25 crc kubenswrapper[4832]: I0312 15:09:25.733926 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.733889787 podStartE2EDuration="2.733889787s" podCreationTimestamp="2026-03-12 15:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:09:25.728202774 +0000 UTC m=+1324.372217010" watchObservedRunningTime="2026-03-12 15:09:25.733889787 +0000 UTC m=+1324.377904063" Mar 12 15:09:28 crc kubenswrapper[4832]: I0312 
15:09:28.011813 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 12 15:09:29 crc kubenswrapper[4832]: I0312 15:09:29.141607 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 12 15:09:34 crc kubenswrapper[4832]: I0312 15:09:34.126808 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 15:09:34 crc kubenswrapper[4832]: I0312 15:09:34.127454 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 15:09:34 crc kubenswrapper[4832]: I0312 15:09:34.141356 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 12 15:09:34 crc kubenswrapper[4832]: I0312 15:09:34.171385 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 12 15:09:34 crc kubenswrapper[4832]: I0312 15:09:34.818765 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 12 15:09:35 crc kubenswrapper[4832]: I0312 15:09:35.209756 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="368ae710-da6a-4c6e-8146-1ed639743e78" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 15:09:35 crc kubenswrapper[4832]: I0312 15:09:35.209842 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="368ae710-da6a-4c6e-8146-1ed639743e78" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 15:09:41 crc kubenswrapper[4832]: E0312 15:09:41.729994 4832 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32824c63_ac4d_44cc_a22e_a8be0c6ae3d2.slice/crio-15547e582e9806db9422224d4f63bb6653271646c8776a2017fdd56cc6c4d73c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28a22ae2_4296_4673_b4eb_2ffd7065ac0a.slice/crio-conmon-629ef02557ec5e052c77017b16b74f188878aae766f79a6ef451f15c8ad2034e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32824c63_ac4d_44cc_a22e_a8be0c6ae3d2.slice/crio-conmon-15547e582e9806db9422224d4f63bb6653271646c8776a2017fdd56cc6c4d73c.scope\": RecentStats: unable to find data in memory cache]" Mar 12 15:09:41 crc kubenswrapper[4832]: I0312 15:09:41.890747 4832 generic.go:334] "Generic (PLEG): container finished" podID="32824c63-ac4d-44cc-a22e-a8be0c6ae3d2" containerID="15547e582e9806db9422224d4f63bb6653271646c8776a2017fdd56cc6c4d73c" exitCode=137 Mar 12 15:09:41 crc kubenswrapper[4832]: I0312 15:09:41.890821 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"32824c63-ac4d-44cc-a22e-a8be0c6ae3d2","Type":"ContainerDied","Data":"15547e582e9806db9422224d4f63bb6653271646c8776a2017fdd56cc6c4d73c"} Mar 12 15:09:41 crc kubenswrapper[4832]: I0312 15:09:41.891164 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"32824c63-ac4d-44cc-a22e-a8be0c6ae3d2","Type":"ContainerDied","Data":"963500f6491cb0ddfb4a6ce2f730204a3a7a1d21fef426ad5e3c3ca3035977e1"} Mar 12 15:09:41 crc kubenswrapper[4832]: I0312 15:09:41.891189 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="963500f6491cb0ddfb4a6ce2f730204a3a7a1d21fef426ad5e3c3ca3035977e1" Mar 12 15:09:41 crc kubenswrapper[4832]: I0312 15:09:41.894826 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="6108009e-5c71-4abd-8f54-1eaad2158264" containerID="f599e51cd1b0eee74251330c819e8c9a673b9d3b9ae501bedf6749891d02ae3a" exitCode=137 Mar 12 15:09:41 crc kubenswrapper[4832]: I0312 15:09:41.894874 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6108009e-5c71-4abd-8f54-1eaad2158264","Type":"ContainerDied","Data":"f599e51cd1b0eee74251330c819e8c9a673b9d3b9ae501bedf6749891d02ae3a"} Mar 12 15:09:41 crc kubenswrapper[4832]: I0312 15:09:41.894901 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6108009e-5c71-4abd-8f54-1eaad2158264","Type":"ContainerDied","Data":"b072ab51a3322ebdae3242fca5e78510e52c872bfacae4636553d34965749df3"} Mar 12 15:09:41 crc kubenswrapper[4832]: I0312 15:09:41.894912 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b072ab51a3322ebdae3242fca5e78510e52c872bfacae4636553d34965749df3" Mar 12 15:09:41 crc kubenswrapper[4832]: I0312 15:09:41.945910 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 15:09:41 crc kubenswrapper[4832]: I0312 15:09:41.954231 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.122149 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6108009e-5c71-4abd-8f54-1eaad2158264-combined-ca-bundle\") pod \"6108009e-5c71-4abd-8f54-1eaad2158264\" (UID: \"6108009e-5c71-4abd-8f54-1eaad2158264\") " Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.122206 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fjth\" (UniqueName: \"kubernetes.io/projected/6108009e-5c71-4abd-8f54-1eaad2158264-kube-api-access-5fjth\") pod \"6108009e-5c71-4abd-8f54-1eaad2158264\" (UID: \"6108009e-5c71-4abd-8f54-1eaad2158264\") " Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.122291 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32824c63-ac4d-44cc-a22e-a8be0c6ae3d2-config-data\") pod \"32824c63-ac4d-44cc-a22e-a8be0c6ae3d2\" (UID: \"32824c63-ac4d-44cc-a22e-a8be0c6ae3d2\") " Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.122315 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6108009e-5c71-4abd-8f54-1eaad2158264-config-data\") pod \"6108009e-5c71-4abd-8f54-1eaad2158264\" (UID: \"6108009e-5c71-4abd-8f54-1eaad2158264\") " Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.122378 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nxgl\" (UniqueName: \"kubernetes.io/projected/32824c63-ac4d-44cc-a22e-a8be0c6ae3d2-kube-api-access-4nxgl\") pod \"32824c63-ac4d-44cc-a22e-a8be0c6ae3d2\" (UID: \"32824c63-ac4d-44cc-a22e-a8be0c6ae3d2\") " Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.122451 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/6108009e-5c71-4abd-8f54-1eaad2158264-logs\") pod \"6108009e-5c71-4abd-8f54-1eaad2158264\" (UID: \"6108009e-5c71-4abd-8f54-1eaad2158264\") " Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.122496 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32824c63-ac4d-44cc-a22e-a8be0c6ae3d2-combined-ca-bundle\") pod \"32824c63-ac4d-44cc-a22e-a8be0c6ae3d2\" (UID: \"32824c63-ac4d-44cc-a22e-a8be0c6ae3d2\") " Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.123126 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6108009e-5c71-4abd-8f54-1eaad2158264-logs" (OuterVolumeSpecName: "logs") pod "6108009e-5c71-4abd-8f54-1eaad2158264" (UID: "6108009e-5c71-4abd-8f54-1eaad2158264"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.128908 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6108009e-5c71-4abd-8f54-1eaad2158264-kube-api-access-5fjth" (OuterVolumeSpecName: "kube-api-access-5fjth") pod "6108009e-5c71-4abd-8f54-1eaad2158264" (UID: "6108009e-5c71-4abd-8f54-1eaad2158264"). InnerVolumeSpecName "kube-api-access-5fjth". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.135669 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32824c63-ac4d-44cc-a22e-a8be0c6ae3d2-kube-api-access-4nxgl" (OuterVolumeSpecName: "kube-api-access-4nxgl") pod "32824c63-ac4d-44cc-a22e-a8be0c6ae3d2" (UID: "32824c63-ac4d-44cc-a22e-a8be0c6ae3d2"). InnerVolumeSpecName "kube-api-access-4nxgl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.163355 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32824c63-ac4d-44cc-a22e-a8be0c6ae3d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32824c63-ac4d-44cc-a22e-a8be0c6ae3d2" (UID: "32824c63-ac4d-44cc-a22e-a8be0c6ae3d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.164638 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32824c63-ac4d-44cc-a22e-a8be0c6ae3d2-config-data" (OuterVolumeSpecName: "config-data") pod "32824c63-ac4d-44cc-a22e-a8be0c6ae3d2" (UID: "32824c63-ac4d-44cc-a22e-a8be0c6ae3d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.165791 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6108009e-5c71-4abd-8f54-1eaad2158264-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6108009e-5c71-4abd-8f54-1eaad2158264" (UID: "6108009e-5c71-4abd-8f54-1eaad2158264"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.173028 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6108009e-5c71-4abd-8f54-1eaad2158264-config-data" (OuterVolumeSpecName: "config-data") pod "6108009e-5c71-4abd-8f54-1eaad2158264" (UID: "6108009e-5c71-4abd-8f54-1eaad2158264"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.225650 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6108009e-5c71-4abd-8f54-1eaad2158264-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.225707 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fjth\" (UniqueName: \"kubernetes.io/projected/6108009e-5c71-4abd-8f54-1eaad2158264-kube-api-access-5fjth\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.225733 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32824c63-ac4d-44cc-a22e-a8be0c6ae3d2-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.225752 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6108009e-5c71-4abd-8f54-1eaad2158264-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.225769 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nxgl\" (UniqueName: \"kubernetes.io/projected/32824c63-ac4d-44cc-a22e-a8be0c6ae3d2-kube-api-access-4nxgl\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.225785 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6108009e-5c71-4abd-8f54-1eaad2158264-logs\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.225809 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32824c63-ac4d-44cc-a22e-a8be0c6ae3d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.905471 4832 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.905638 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.962682 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 15:09:42 crc kubenswrapper[4832]: I0312 15:09:42.989689 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.007176 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.042144 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.055781 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 12 15:09:43 crc kubenswrapper[4832]: E0312 15:09:43.056263 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6108009e-5c71-4abd-8f54-1eaad2158264" containerName="nova-metadata-metadata" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.056292 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6108009e-5c71-4abd-8f54-1eaad2158264" containerName="nova-metadata-metadata" Mar 12 15:09:43 crc kubenswrapper[4832]: E0312 15:09:43.056321 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6108009e-5c71-4abd-8f54-1eaad2158264" containerName="nova-metadata-log" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.056331 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6108009e-5c71-4abd-8f54-1eaad2158264" containerName="nova-metadata-log" Mar 12 15:09:43 crc kubenswrapper[4832]: E0312 15:09:43.056346 4832 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="32824c63-ac4d-44cc-a22e-a8be0c6ae3d2" containerName="nova-cell1-novncproxy-novncproxy" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.056355 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="32824c63-ac4d-44cc-a22e-a8be0c6ae3d2" containerName="nova-cell1-novncproxy-novncproxy" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.056685 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6108009e-5c71-4abd-8f54-1eaad2158264" containerName="nova-metadata-metadata" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.056715 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6108009e-5c71-4abd-8f54-1eaad2158264" containerName="nova-metadata-log" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.056729 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="32824c63-ac4d-44cc-a22e-a8be0c6ae3d2" containerName="nova-cell1-novncproxy-novncproxy" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.059265 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.061388 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.061476 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.067889 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.069912 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.074247 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.074425 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.074587 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.080147 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.092959 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.245660 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce2dd00e-eeda-4844-a1d6-64391351b678-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce2dd00e-eeda-4844-a1d6-64391351b678\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.245976 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce2dd00e-eeda-4844-a1d6-64391351b678-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce2dd00e-eeda-4844-a1d6-64391351b678\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.246151 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/61b7e836-b94e-4397-b34f-99bf775d778d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"61b7e836-b94e-4397-b34f-99bf775d778d\") " pod="openstack/nova-metadata-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.246286 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61b7e836-b94e-4397-b34f-99bf775d778d-config-data\") pod \"nova-metadata-0\" (UID: \"61b7e836-b94e-4397-b34f-99bf775d778d\") " pod="openstack/nova-metadata-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.246439 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce2dd00e-eeda-4844-a1d6-64391351b678-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce2dd00e-eeda-4844-a1d6-64391351b678\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.246605 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61b7e836-b94e-4397-b34f-99bf775d778d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"61b7e836-b94e-4397-b34f-99bf775d778d\") " pod="openstack/nova-metadata-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.246726 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stzt6\" (UniqueName: \"kubernetes.io/projected/61b7e836-b94e-4397-b34f-99bf775d778d-kube-api-access-stzt6\") pod \"nova-metadata-0\" (UID: \"61b7e836-b94e-4397-b34f-99bf775d778d\") " pod="openstack/nova-metadata-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.246846 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/61b7e836-b94e-4397-b34f-99bf775d778d-logs\") pod \"nova-metadata-0\" (UID: \"61b7e836-b94e-4397-b34f-99bf775d778d\") " pod="openstack/nova-metadata-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.246991 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce2dd00e-eeda-4844-a1d6-64391351b678-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce2dd00e-eeda-4844-a1d6-64391351b678\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.247122 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jln7\" (UniqueName: \"kubernetes.io/projected/ce2dd00e-eeda-4844-a1d6-64391351b678-kube-api-access-9jln7\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce2dd00e-eeda-4844-a1d6-64391351b678\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.349431 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b7e836-b94e-4397-b34f-99bf775d778d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"61b7e836-b94e-4397-b34f-99bf775d778d\") " pod="openstack/nova-metadata-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.349530 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61b7e836-b94e-4397-b34f-99bf775d778d-config-data\") pod \"nova-metadata-0\" (UID: \"61b7e836-b94e-4397-b34f-99bf775d778d\") " pod="openstack/nova-metadata-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.349677 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce2dd00e-eeda-4844-a1d6-64391351b678-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"ce2dd00e-eeda-4844-a1d6-64391351b678\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.349745 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61b7e836-b94e-4397-b34f-99bf775d778d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"61b7e836-b94e-4397-b34f-99bf775d778d\") " pod="openstack/nova-metadata-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.349772 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stzt6\" (UniqueName: \"kubernetes.io/projected/61b7e836-b94e-4397-b34f-99bf775d778d-kube-api-access-stzt6\") pod \"nova-metadata-0\" (UID: \"61b7e836-b94e-4397-b34f-99bf775d778d\") " pod="openstack/nova-metadata-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.349809 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61b7e836-b94e-4397-b34f-99bf775d778d-logs\") pod \"nova-metadata-0\" (UID: \"61b7e836-b94e-4397-b34f-99bf775d778d\") " pod="openstack/nova-metadata-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.349867 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce2dd00e-eeda-4844-a1d6-64391351b678-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce2dd00e-eeda-4844-a1d6-64391351b678\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.349902 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jln7\" (UniqueName: \"kubernetes.io/projected/ce2dd00e-eeda-4844-a1d6-64391351b678-kube-api-access-9jln7\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce2dd00e-eeda-4844-a1d6-64391351b678\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:43 crc 
kubenswrapper[4832]: I0312 15:09:43.349939 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce2dd00e-eeda-4844-a1d6-64391351b678-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce2dd00e-eeda-4844-a1d6-64391351b678\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.349963 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce2dd00e-eeda-4844-a1d6-64391351b678-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce2dd00e-eeda-4844-a1d6-64391351b678\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.350586 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61b7e836-b94e-4397-b34f-99bf775d778d-logs\") pod \"nova-metadata-0\" (UID: \"61b7e836-b94e-4397-b34f-99bf775d778d\") " pod="openstack/nova-metadata-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.354067 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b7e836-b94e-4397-b34f-99bf775d778d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"61b7e836-b94e-4397-b34f-99bf775d778d\") " pod="openstack/nova-metadata-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.355675 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce2dd00e-eeda-4844-a1d6-64391351b678-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce2dd00e-eeda-4844-a1d6-64391351b678\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.356238 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce2dd00e-eeda-4844-a1d6-64391351b678-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce2dd00e-eeda-4844-a1d6-64391351b678\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.356328 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61b7e836-b94e-4397-b34f-99bf775d778d-config-data\") pod \"nova-metadata-0\" (UID: \"61b7e836-b94e-4397-b34f-99bf775d778d\") " pod="openstack/nova-metadata-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.356539 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce2dd00e-eeda-4844-a1d6-64391351b678-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce2dd00e-eeda-4844-a1d6-64391351b678\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.357559 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce2dd00e-eeda-4844-a1d6-64391351b678-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce2dd00e-eeda-4844-a1d6-64391351b678\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.357705 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61b7e836-b94e-4397-b34f-99bf775d778d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"61b7e836-b94e-4397-b34f-99bf775d778d\") " pod="openstack/nova-metadata-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.367487 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stzt6\" (UniqueName: \"kubernetes.io/projected/61b7e836-b94e-4397-b34f-99bf775d778d-kube-api-access-stzt6\") pod \"nova-metadata-0\" (UID: 
\"61b7e836-b94e-4397-b34f-99bf775d778d\") " pod="openstack/nova-metadata-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.368970 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jln7\" (UniqueName: \"kubernetes.io/projected/ce2dd00e-eeda-4844-a1d6-64391351b678-kube-api-access-9jln7\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce2dd00e-eeda-4844-a1d6-64391351b678\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.385201 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.398343 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.856183 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.924977 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ce2dd00e-eeda-4844-a1d6-64391351b678","Type":"ContainerStarted","Data":"590d771eae5f71d6750079be83d2a873ff92c6713736e0e03df9207da6a0dbee"} Mar 12 15:09:43 crc kubenswrapper[4832]: I0312 15:09:43.950493 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 15:09:43 crc kubenswrapper[4832]: W0312 15:09:43.960456 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61b7e836_b94e_4397_b34f_99bf775d778d.slice/crio-b85a2bf81d8b6966e93bebdb6634fa969ff464c33dae21cf4ac6f85dbbcae078 WatchSource:0}: Error finding container b85a2bf81d8b6966e93bebdb6634fa969ff464c33dae21cf4ac6f85dbbcae078: Status 404 returned error can't find the container with id b85a2bf81d8b6966e93bebdb6634fa969ff464c33dae21cf4ac6f85dbbcae078 Mar 12 15:09:44 crc 
kubenswrapper[4832]: I0312 15:09:44.129729 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.130088 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.130455 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.130489 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.134474 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.134836 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.353147 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-l26ps"] Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.360710 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.375586 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-l26ps"] Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.380343 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-l26ps\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.380417 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-l26ps\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.380491 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z58rj\" (UniqueName: \"kubernetes.io/projected/09318380-c905-4904-823c-7d0fa5e1b37c-kube-api-access-z58rj\") pod \"dnsmasq-dns-cd5cbd7b9-l26ps\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.380544 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-l26ps\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.380587 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-config\") pod \"dnsmasq-dns-cd5cbd7b9-l26ps\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.380625 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-l26ps\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.481925 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-l26ps\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.482210 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-config\") pod \"dnsmasq-dns-cd5cbd7b9-l26ps\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.482253 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-l26ps\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.482307 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-l26ps\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.482343 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-l26ps\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.482396 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z58rj\" (UniqueName: \"kubernetes.io/projected/09318380-c905-4904-823c-7d0fa5e1b37c-kube-api-access-z58rj\") pod \"dnsmasq-dns-cd5cbd7b9-l26ps\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.483212 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-l26ps\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.483316 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-l26ps\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.483400 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-l26ps\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.484126 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-l26ps\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.484151 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-config\") pod \"dnsmasq-dns-cd5cbd7b9-l26ps\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.502058 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z58rj\" (UniqueName: \"kubernetes.io/projected/09318380-c905-4904-823c-7d0fa5e1b37c-kube-api-access-z58rj\") pod \"dnsmasq-dns-cd5cbd7b9-l26ps\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.630463 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32824c63-ac4d-44cc-a22e-a8be0c6ae3d2" path="/var/lib/kubelet/pods/32824c63-ac4d-44cc-a22e-a8be0c6ae3d2/volumes" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.631043 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6108009e-5c71-4abd-8f54-1eaad2158264" path="/var/lib/kubelet/pods/6108009e-5c71-4abd-8f54-1eaad2158264/volumes" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.742812 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.938800 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ce2dd00e-eeda-4844-a1d6-64391351b678","Type":"ContainerStarted","Data":"6f50fd31910dfd6848b2f675178a12eec79a3761422e91e000b7065314795957"} Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.954639 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"61b7e836-b94e-4397-b34f-99bf775d778d","Type":"ContainerStarted","Data":"58dccd4df51ab52e4dbcb3e6702e5f4c05fa405bd096434c927eed07c29a3c65"} Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.954705 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"61b7e836-b94e-4397-b34f-99bf775d778d","Type":"ContainerStarted","Data":"86b80db7f5cc39fb2aa8ee8466eacc34cd26a70cb1d3f07d3a8b14e892831fb4"} Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.954725 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"61b7e836-b94e-4397-b34f-99bf775d778d","Type":"ContainerStarted","Data":"b85a2bf81d8b6966e93bebdb6634fa969ff464c33dae21cf4ac6f85dbbcae078"} Mar 12 15:09:44 crc kubenswrapper[4832]: I0312 15:09:44.984109 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.984082793 podStartE2EDuration="2.984082793s" podCreationTimestamp="2026-03-12 15:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:09:44.964470789 +0000 UTC m=+1343.608485015" watchObservedRunningTime="2026-03-12 15:09:44.984082793 +0000 UTC m=+1343.628097019" Mar 12 15:09:45 crc kubenswrapper[4832]: I0312 15:09:45.012075 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-metadata-0" podStartSLOduration=3.012043026 podStartE2EDuration="3.012043026s" podCreationTimestamp="2026-03-12 15:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:09:44.986956015 +0000 UTC m=+1343.630970241" watchObservedRunningTime="2026-03-12 15:09:45.012043026 +0000 UTC m=+1343.656057262" Mar 12 15:09:45 crc kubenswrapper[4832]: I0312 15:09:45.328180 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-l26ps"] Mar 12 15:09:45 crc kubenswrapper[4832]: I0312 15:09:45.964729 4832 generic.go:334] "Generic (PLEG): container finished" podID="09318380-c905-4904-823c-7d0fa5e1b37c" containerID="bc11ac11f0318e3ac5c8aef39593bad244e0e8edbe62b6faa622d3a9d45af635" exitCode=0 Mar 12 15:09:45 crc kubenswrapper[4832]: I0312 15:09:45.964830 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" event={"ID":"09318380-c905-4904-823c-7d0fa5e1b37c","Type":"ContainerDied","Data":"bc11ac11f0318e3ac5c8aef39593bad244e0e8edbe62b6faa622d3a9d45af635"} Mar 12 15:09:45 crc kubenswrapper[4832]: I0312 15:09:45.965084 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" event={"ID":"09318380-c905-4904-823c-7d0fa5e1b37c","Type":"ContainerStarted","Data":"71adeb18bf13a6934c0ab5ea37f336249faa56d3f09a292d85924822af6ce6b7"} Mar 12 15:09:46 crc kubenswrapper[4832]: I0312 15:09:46.257240 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:09:46 crc kubenswrapper[4832]: I0312 15:09:46.257864 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5497a102-9c08-4111-b686-3e1762c474da" containerName="ceilometer-central-agent" containerID="cri-o://104579fa01e7c5c9632ba5262ddfed4799c138f2bb4f223243809ffac685d5e3" gracePeriod=30 Mar 12 15:09:46 crc 
kubenswrapper[4832]: I0312 15:09:46.257899 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5497a102-9c08-4111-b686-3e1762c474da" containerName="proxy-httpd" containerID="cri-o://46b867f589c822dd12887938b76d411d93efdc96759a0db21ef32cc48f28e74f" gracePeriod=30 Mar 12 15:09:46 crc kubenswrapper[4832]: I0312 15:09:46.257966 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5497a102-9c08-4111-b686-3e1762c474da" containerName="ceilometer-notification-agent" containerID="cri-o://0c7af4ef60a9f01361e6aba671d9801a37cb73ee144f7b0a8f5e1b65a78c8c67" gracePeriod=30 Mar 12 15:09:46 crc kubenswrapper[4832]: I0312 15:09:46.257979 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5497a102-9c08-4111-b686-3e1762c474da" containerName="sg-core" containerID="cri-o://83c477df0d99d09ffd7c8068ce81c1c9330ffa48070d942cc437b092560a0945" gracePeriod=30 Mar 12 15:09:46 crc kubenswrapper[4832]: I0312 15:09:46.418452 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 15:09:46 crc kubenswrapper[4832]: I0312 15:09:46.982183 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" event={"ID":"09318380-c905-4904-823c-7d0fa5e1b37c","Type":"ContainerStarted","Data":"0f491f667ccc69de62df38373fa447b6b72d301ec1341fbc75a58f91850721f4"} Mar 12 15:09:46 crc kubenswrapper[4832]: I0312 15:09:46.983455 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:46 crc kubenswrapper[4832]: I0312 15:09:46.988424 4832 generic.go:334] "Generic (PLEG): container finished" podID="5497a102-9c08-4111-b686-3e1762c474da" containerID="46b867f589c822dd12887938b76d411d93efdc96759a0db21ef32cc48f28e74f" exitCode=0 Mar 12 15:09:46 crc kubenswrapper[4832]: I0312 15:09:46.988448 4832 generic.go:334] 
"Generic (PLEG): container finished" podID="5497a102-9c08-4111-b686-3e1762c474da" containerID="83c477df0d99d09ffd7c8068ce81c1c9330ffa48070d942cc437b092560a0945" exitCode=2 Mar 12 15:09:46 crc kubenswrapper[4832]: I0312 15:09:46.988455 4832 generic.go:334] "Generic (PLEG): container finished" podID="5497a102-9c08-4111-b686-3e1762c474da" containerID="0c7af4ef60a9f01361e6aba671d9801a37cb73ee144f7b0a8f5e1b65a78c8c67" exitCode=0 Mar 12 15:09:46 crc kubenswrapper[4832]: I0312 15:09:46.988462 4832 generic.go:334] "Generic (PLEG): container finished" podID="5497a102-9c08-4111-b686-3e1762c474da" containerID="104579fa01e7c5c9632ba5262ddfed4799c138f2bb4f223243809ffac685d5e3" exitCode=0 Mar 12 15:09:46 crc kubenswrapper[4832]: I0312 15:09:46.988638 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="368ae710-da6a-4c6e-8146-1ed639743e78" containerName="nova-api-log" containerID="cri-o://848ba6424d7559c75a5499a69efc2a06be63b7d3fa1f7360258b391270ab3b3f" gracePeriod=30 Mar 12 15:09:46 crc kubenswrapper[4832]: I0312 15:09:46.988841 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5497a102-9c08-4111-b686-3e1762c474da","Type":"ContainerDied","Data":"46b867f589c822dd12887938b76d411d93efdc96759a0db21ef32cc48f28e74f"} Mar 12 15:09:46 crc kubenswrapper[4832]: I0312 15:09:46.988864 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5497a102-9c08-4111-b686-3e1762c474da","Type":"ContainerDied","Data":"83c477df0d99d09ffd7c8068ce81c1c9330ffa48070d942cc437b092560a0945"} Mar 12 15:09:46 crc kubenswrapper[4832]: I0312 15:09:46.988873 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5497a102-9c08-4111-b686-3e1762c474da","Type":"ContainerDied","Data":"0c7af4ef60a9f01361e6aba671d9801a37cb73ee144f7b0a8f5e1b65a78c8c67"} Mar 12 15:09:46 crc kubenswrapper[4832]: I0312 15:09:46.988881 4832 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5497a102-9c08-4111-b686-3e1762c474da","Type":"ContainerDied","Data":"104579fa01e7c5c9632ba5262ddfed4799c138f2bb4f223243809ffac685d5e3"} Mar 12 15:09:46 crc kubenswrapper[4832]: I0312 15:09:46.988924 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="368ae710-da6a-4c6e-8146-1ed639743e78" containerName="nova-api-api" containerID="cri-o://548c3f4641838c31ee2b520670c5372702a2218d32411eb17328f99f1264c69a" gracePeriod=30 Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.124704 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.140137 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-ceilometer-tls-certs\") pod \"5497a102-9c08-4111-b686-3e1762c474da\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.140243 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-combined-ca-bundle\") pod \"5497a102-9c08-4111-b686-3e1762c474da\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.140335 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-scripts\") pod \"5497a102-9c08-4111-b686-3e1762c474da\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.140429 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-sg-core-conf-yaml\") pod \"5497a102-9c08-4111-b686-3e1762c474da\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.140486 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp6qc\" (UniqueName: \"kubernetes.io/projected/5497a102-9c08-4111-b686-3e1762c474da-kube-api-access-lp6qc\") pod \"5497a102-9c08-4111-b686-3e1762c474da\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.140525 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5497a102-9c08-4111-b686-3e1762c474da-log-httpd\") pod \"5497a102-9c08-4111-b686-3e1762c474da\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.140547 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5497a102-9c08-4111-b686-3e1762c474da-run-httpd\") pod \"5497a102-9c08-4111-b686-3e1762c474da\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.140580 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-config-data\") pod \"5497a102-9c08-4111-b686-3e1762c474da\" (UID: \"5497a102-9c08-4111-b686-3e1762c474da\") " Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.140962 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5497a102-9c08-4111-b686-3e1762c474da-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5497a102-9c08-4111-b686-3e1762c474da" (UID: "5497a102-9c08-4111-b686-3e1762c474da"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.141003 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5497a102-9c08-4111-b686-3e1762c474da-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5497a102-9c08-4111-b686-3e1762c474da" (UID: "5497a102-9c08-4111-b686-3e1762c474da"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.141532 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5497a102-9c08-4111-b686-3e1762c474da-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.141551 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5497a102-9c08-4111-b686-3e1762c474da-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.146240 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5497a102-9c08-4111-b686-3e1762c474da-kube-api-access-lp6qc" (OuterVolumeSpecName: "kube-api-access-lp6qc") pod "5497a102-9c08-4111-b686-3e1762c474da" (UID: "5497a102-9c08-4111-b686-3e1762c474da"). InnerVolumeSpecName "kube-api-access-lp6qc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.153263 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-scripts" (OuterVolumeSpecName: "scripts") pod "5497a102-9c08-4111-b686-3e1762c474da" (UID: "5497a102-9c08-4111-b686-3e1762c474da"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.158180 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" podStartSLOduration=3.158160533 podStartE2EDuration="3.158160533s" podCreationTimestamp="2026-03-12 15:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:09:47.005906278 +0000 UTC m=+1345.649920504" watchObservedRunningTime="2026-03-12 15:09:47.158160533 +0000 UTC m=+1345.802174759" Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.181292 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5497a102-9c08-4111-b686-3e1762c474da" (UID: "5497a102-9c08-4111-b686-3e1762c474da"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.243779 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.243819 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp6qc\" (UniqueName: \"kubernetes.io/projected/5497a102-9c08-4111-b686-3e1762c474da-kube-api-access-lp6qc\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.243836 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.244408 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5497a102-9c08-4111-b686-3e1762c474da" (UID: "5497a102-9c08-4111-b686-3e1762c474da"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.258220 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5497a102-9c08-4111-b686-3e1762c474da" (UID: "5497a102-9c08-4111-b686-3e1762c474da"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.291079 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-config-data" (OuterVolumeSpecName: "config-data") pod "5497a102-9c08-4111-b686-3e1762c474da" (UID: "5497a102-9c08-4111-b686-3e1762c474da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.346288 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.346331 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.346348 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5497a102-9c08-4111-b686-3e1762c474da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.999675 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5497a102-9c08-4111-b686-3e1762c474da","Type":"ContainerDied","Data":"8257e202f308566ddad0987331b895a3455c81674655799bfab8de74a3a1dcb7"} Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.999699 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:09:47 crc kubenswrapper[4832]: I0312 15:09:47.999732 4832 scope.go:117] "RemoveContainer" containerID="46b867f589c822dd12887938b76d411d93efdc96759a0db21ef32cc48f28e74f" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.002926 4832 generic.go:334] "Generic (PLEG): container finished" podID="368ae710-da6a-4c6e-8146-1ed639743e78" containerID="848ba6424d7559c75a5499a69efc2a06be63b7d3fa1f7360258b391270ab3b3f" exitCode=143 Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.003011 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"368ae710-da6a-4c6e-8146-1ed639743e78","Type":"ContainerDied","Data":"848ba6424d7559c75a5499a69efc2a06be63b7d3fa1f7360258b391270ab3b3f"} Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.021662 4832 scope.go:117] "RemoveContainer" containerID="83c477df0d99d09ffd7c8068ce81c1c9330ffa48070d942cc437b092560a0945" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.039848 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.054803 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.056191 4832 scope.go:117] "RemoveContainer" containerID="0c7af4ef60a9f01361e6aba671d9801a37cb73ee144f7b0a8f5e1b65a78c8c67" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.062764 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:09:48 crc kubenswrapper[4832]: E0312 15:09:48.063116 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5497a102-9c08-4111-b686-3e1762c474da" containerName="proxy-httpd" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.063132 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5497a102-9c08-4111-b686-3e1762c474da" containerName="proxy-httpd" Mar 12 15:09:48 crc 
kubenswrapper[4832]: E0312 15:09:48.063147 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5497a102-9c08-4111-b686-3e1762c474da" containerName="ceilometer-notification-agent" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.063154 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5497a102-9c08-4111-b686-3e1762c474da" containerName="ceilometer-notification-agent" Mar 12 15:09:48 crc kubenswrapper[4832]: E0312 15:09:48.063168 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5497a102-9c08-4111-b686-3e1762c474da" containerName="sg-core" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.063175 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5497a102-9c08-4111-b686-3e1762c474da" containerName="sg-core" Mar 12 15:09:48 crc kubenswrapper[4832]: E0312 15:09:48.063193 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5497a102-9c08-4111-b686-3e1762c474da" containerName="ceilometer-central-agent" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.063199 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5497a102-9c08-4111-b686-3e1762c474da" containerName="ceilometer-central-agent" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.063372 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5497a102-9c08-4111-b686-3e1762c474da" containerName="ceilometer-central-agent" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.063387 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5497a102-9c08-4111-b686-3e1762c474da" containerName="ceilometer-notification-agent" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.063402 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5497a102-9c08-4111-b686-3e1762c474da" containerName="sg-core" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.063414 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5497a102-9c08-4111-b686-3e1762c474da" 
containerName="proxy-httpd" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.065011 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.067462 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.067615 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.067779 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.086856 4832 scope.go:117] "RemoveContainer" containerID="104579fa01e7c5c9632ba5262ddfed4799c138f2bb4f223243809ffac685d5e3" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.108848 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.161418 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/817d774e-f102-4a3f-aee6-541cb9b9119c-run-httpd\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.161487 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.161533 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.161581 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-config-data\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.161608 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.161629 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-scripts\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.161649 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/817d774e-f102-4a3f-aee6-541cb9b9119c-log-httpd\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.161667 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njc5q\" (UniqueName: \"kubernetes.io/projected/817d774e-f102-4a3f-aee6-541cb9b9119c-kube-api-access-njc5q\") pod \"ceilometer-0\" (UID: 
\"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.163207 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 15:09:48 crc kubenswrapper[4832]: E0312 15:09:48.164381 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-njc5q log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="817d774e-f102-4a3f-aee6-541cb9b9119c" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.263886 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/817d774e-f102-4a3f-aee6-541cb9b9119c-log-httpd\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.263942 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njc5q\" (UniqueName: \"kubernetes.io/projected/817d774e-f102-4a3f-aee6-541cb9b9119c-kube-api-access-njc5q\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.264040 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/817d774e-f102-4a3f-aee6-541cb9b9119c-run-httpd\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0" Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.264091 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0"
Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.264130 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0"
Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.264202 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-config-data\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0"
Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.264241 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0"
Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.264280 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-scripts\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0"
Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.264585 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/817d774e-f102-4a3f-aee6-541cb9b9119c-log-httpd\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0"
Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.264865 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/817d774e-f102-4a3f-aee6-541cb9b9119c-run-httpd\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0"
Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.274066 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0"
Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.274168 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0"
Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.274265 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0"
Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.275334 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-config-data\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0"
Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.276058 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-scripts\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0"
Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.287872 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njc5q\" (UniqueName: \"kubernetes.io/projected/817d774e-f102-4a3f-aee6-541cb9b9119c-kube-api-access-njc5q\") pod \"ceilometer-0\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") " pod="openstack/ceilometer-0"
Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.385577 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.386407 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.398956 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 15:09:48 crc kubenswrapper[4832]: I0312 15:09:48.629068 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5497a102-9c08-4111-b686-3e1762c474da" path="/var/lib/kubelet/pods/5497a102-9c08-4111-b686-3e1762c474da/volumes"
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.015237 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.029588 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.079274 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-config-data\") pod \"817d774e-f102-4a3f-aee6-541cb9b9119c\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") "
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.079386 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-scripts\") pod \"817d774e-f102-4a3f-aee6-541cb9b9119c\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") "
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.079437 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njc5q\" (UniqueName: \"kubernetes.io/projected/817d774e-f102-4a3f-aee6-541cb9b9119c-kube-api-access-njc5q\") pod \"817d774e-f102-4a3f-aee6-541cb9b9119c\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") "
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.079459 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/817d774e-f102-4a3f-aee6-541cb9b9119c-log-httpd\") pod \"817d774e-f102-4a3f-aee6-541cb9b9119c\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") "
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.079547 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-sg-core-conf-yaml\") pod \"817d774e-f102-4a3f-aee6-541cb9b9119c\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") "
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.079579 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-combined-ca-bundle\") pod \"817d774e-f102-4a3f-aee6-541cb9b9119c\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") "
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.079626 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-ceilometer-tls-certs\") pod \"817d774e-f102-4a3f-aee6-541cb9b9119c\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") "
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.079656 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/817d774e-f102-4a3f-aee6-541cb9b9119c-run-httpd\") pod \"817d774e-f102-4a3f-aee6-541cb9b9119c\" (UID: \"817d774e-f102-4a3f-aee6-541cb9b9119c\") "
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.080340 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/817d774e-f102-4a3f-aee6-541cb9b9119c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "817d774e-f102-4a3f-aee6-541cb9b9119c" (UID: "817d774e-f102-4a3f-aee6-541cb9b9119c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.081794 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/817d774e-f102-4a3f-aee6-541cb9b9119c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "817d774e-f102-4a3f-aee6-541cb9b9119c" (UID: "817d774e-f102-4a3f-aee6-541cb9b9119c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.087764 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/817d774e-f102-4a3f-aee6-541cb9b9119c-kube-api-access-njc5q" (OuterVolumeSpecName: "kube-api-access-njc5q") pod "817d774e-f102-4a3f-aee6-541cb9b9119c" (UID: "817d774e-f102-4a3f-aee6-541cb9b9119c"). InnerVolumeSpecName "kube-api-access-njc5q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.089301 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-config-data" (OuterVolumeSpecName: "config-data") pod "817d774e-f102-4a3f-aee6-541cb9b9119c" (UID: "817d774e-f102-4a3f-aee6-541cb9b9119c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.094606 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-scripts" (OuterVolumeSpecName: "scripts") pod "817d774e-f102-4a3f-aee6-541cb9b9119c" (UID: "817d774e-f102-4a3f-aee6-541cb9b9119c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.094950 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "817d774e-f102-4a3f-aee6-541cb9b9119c" (UID: "817d774e-f102-4a3f-aee6-541cb9b9119c"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.095036 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "817d774e-f102-4a3f-aee6-541cb9b9119c" (UID: "817d774e-f102-4a3f-aee6-541cb9b9119c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.096314 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "817d774e-f102-4a3f-aee6-541cb9b9119c" (UID: "817d774e-f102-4a3f-aee6-541cb9b9119c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.182992 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njc5q\" (UniqueName: \"kubernetes.io/projected/817d774e-f102-4a3f-aee6-541cb9b9119c-kube-api-access-njc5q\") on node \"crc\" DevicePath \"\""
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.183044 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/817d774e-f102-4a3f-aee6-541cb9b9119c-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.183064 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.183082 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.183098 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.183114 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/817d774e-f102-4a3f-aee6-541cb9b9119c-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.183128 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-config-data\") on node \"crc\" DevicePath \"\""
Mar 12 15:09:49 crc kubenswrapper[4832]: I0312 15:09:49.183144 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/817d774e-f102-4a3f-aee6-541cb9b9119c-scripts\") on node \"crc\" DevicePath \"\""
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.025453 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.087679 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.098084 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.120162 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.122803 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.130260 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.130530 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.133397 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.143270 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.306047 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-run-httpd\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.306266 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-config-data\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.306304 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.306326 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.306352 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-scripts\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.306424 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.306475 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffpj6\" (UniqueName: \"kubernetes.io/projected/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-kube-api-access-ffpj6\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.306499 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-log-httpd\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.407696 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-config-data\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.407766 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.407800 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.407834 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-scripts\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.407871 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.407914 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffpj6\" (UniqueName: \"kubernetes.io/projected/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-kube-api-access-ffpj6\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.407940 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-log-httpd\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.408010 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-run-httpd\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.408380 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-run-httpd\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.410873 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-log-httpd\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.415266 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.415273 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.415989 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-config-data\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.416066 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.420284 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-scripts\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.427935 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffpj6\" (UniqueName: \"kubernetes.io/projected/3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e-kube-api-access-ffpj6\") pod \"ceilometer-0\" (UID: \"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e\") " pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.449932 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.565237 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.612196 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/368ae710-da6a-4c6e-8146-1ed639743e78-logs\") pod \"368ae710-da6a-4c6e-8146-1ed639743e78\" (UID: \"368ae710-da6a-4c6e-8146-1ed639743e78\") "
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.612372 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/368ae710-da6a-4c6e-8146-1ed639743e78-config-data\") pod \"368ae710-da6a-4c6e-8146-1ed639743e78\" (UID: \"368ae710-da6a-4c6e-8146-1ed639743e78\") "
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.612461 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4v5g\" (UniqueName: \"kubernetes.io/projected/368ae710-da6a-4c6e-8146-1ed639743e78-kube-api-access-c4v5g\") pod \"368ae710-da6a-4c6e-8146-1ed639743e78\" (UID: \"368ae710-da6a-4c6e-8146-1ed639743e78\") "
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.612539 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368ae710-da6a-4c6e-8146-1ed639743e78-combined-ca-bundle\") pod \"368ae710-da6a-4c6e-8146-1ed639743e78\" (UID: \"368ae710-da6a-4c6e-8146-1ed639743e78\") "
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.612908 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/368ae710-da6a-4c6e-8146-1ed639743e78-logs" (OuterVolumeSpecName: "logs") pod "368ae710-da6a-4c6e-8146-1ed639743e78" (UID: "368ae710-da6a-4c6e-8146-1ed639743e78"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.613137 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/368ae710-da6a-4c6e-8146-1ed639743e78-logs\") on node \"crc\" DevicePath \"\""
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.616894 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/368ae710-da6a-4c6e-8146-1ed639743e78-kube-api-access-c4v5g" (OuterVolumeSpecName: "kube-api-access-c4v5g") pod "368ae710-da6a-4c6e-8146-1ed639743e78" (UID: "368ae710-da6a-4c6e-8146-1ed639743e78"). InnerVolumeSpecName "kube-api-access-c4v5g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.640665 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="817d774e-f102-4a3f-aee6-541cb9b9119c" path="/var/lib/kubelet/pods/817d774e-f102-4a3f-aee6-541cb9b9119c/volumes"
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.644959 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368ae710-da6a-4c6e-8146-1ed639743e78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "368ae710-da6a-4c6e-8146-1ed639743e78" (UID: "368ae710-da6a-4c6e-8146-1ed639743e78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.671614 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368ae710-da6a-4c6e-8146-1ed639743e78-config-data" (OuterVolumeSpecName: "config-data") pod "368ae710-da6a-4c6e-8146-1ed639743e78" (UID: "368ae710-da6a-4c6e-8146-1ed639743e78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.716833 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/368ae710-da6a-4c6e-8146-1ed639743e78-config-data\") on node \"crc\" DevicePath \"\""
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.716865 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4v5g\" (UniqueName: \"kubernetes.io/projected/368ae710-da6a-4c6e-8146-1ed639743e78-kube-api-access-c4v5g\") on node \"crc\" DevicePath \"\""
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.716876 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368ae710-da6a-4c6e-8146-1ed639743e78-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 15:09:50 crc kubenswrapper[4832]: I0312 15:09:50.950128 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 12 15:09:50 crc kubenswrapper[4832]: W0312 15:09:50.954813 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b0a33bd_1b0b_45b6_8937_d7b9047a2a2e.slice/crio-3a3aac0d9f43d82a76d226a6d0ea78d898363d0d13ca5d5c4fea4eb329ea9f11 WatchSource:0}: Error finding container 3a3aac0d9f43d82a76d226a6d0ea78d898363d0d13ca5d5c4fea4eb329ea9f11: Status 404 returned error can't find the container with id 3a3aac0d9f43d82a76d226a6d0ea78d898363d0d13ca5d5c4fea4eb329ea9f11
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.034092 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e","Type":"ContainerStarted","Data":"3a3aac0d9f43d82a76d226a6d0ea78d898363d0d13ca5d5c4fea4eb329ea9f11"}
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.036645 4832 generic.go:334] "Generic (PLEG): container finished" podID="368ae710-da6a-4c6e-8146-1ed639743e78" containerID="548c3f4641838c31ee2b520670c5372702a2218d32411eb17328f99f1264c69a" exitCode=0
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.036681 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"368ae710-da6a-4c6e-8146-1ed639743e78","Type":"ContainerDied","Data":"548c3f4641838c31ee2b520670c5372702a2218d32411eb17328f99f1264c69a"}
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.036701 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"368ae710-da6a-4c6e-8146-1ed639743e78","Type":"ContainerDied","Data":"124d8ff62f9c1757269e1e06240ed898ac3cc5d04abfce69b2f8d62604f41891"}
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.036721 4832 scope.go:117] "RemoveContainer" containerID="548c3f4641838c31ee2b520670c5372702a2218d32411eb17328f99f1264c69a"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.036844 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.099989 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.100054 4832 scope.go:117] "RemoveContainer" containerID="848ba6424d7559c75a5499a69efc2a06be63b7d3fa1f7360258b391270ab3b3f"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.116744 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.135584 4832 scope.go:117] "RemoveContainer" containerID="548c3f4641838c31ee2b520670c5372702a2218d32411eb17328f99f1264c69a"
Mar 12 15:09:51 crc kubenswrapper[4832]: E0312 15:09:51.136343 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"548c3f4641838c31ee2b520670c5372702a2218d32411eb17328f99f1264c69a\": container with ID starting with 548c3f4641838c31ee2b520670c5372702a2218d32411eb17328f99f1264c69a not found: ID does not exist" containerID="548c3f4641838c31ee2b520670c5372702a2218d32411eb17328f99f1264c69a"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.136592 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"548c3f4641838c31ee2b520670c5372702a2218d32411eb17328f99f1264c69a"} err="failed to get container status \"548c3f4641838c31ee2b520670c5372702a2218d32411eb17328f99f1264c69a\": rpc error: code = NotFound desc = could not find container \"548c3f4641838c31ee2b520670c5372702a2218d32411eb17328f99f1264c69a\": container with ID starting with 548c3f4641838c31ee2b520670c5372702a2218d32411eb17328f99f1264c69a not found: ID does not exist"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.136651 4832 scope.go:117] "RemoveContainer" containerID="848ba6424d7559c75a5499a69efc2a06be63b7d3fa1f7360258b391270ab3b3f"
Mar 12 15:09:51 crc kubenswrapper[4832]: E0312 15:09:51.137027 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"848ba6424d7559c75a5499a69efc2a06be63b7d3fa1f7360258b391270ab3b3f\": container with ID starting with 848ba6424d7559c75a5499a69efc2a06be63b7d3fa1f7360258b391270ab3b3f not found: ID does not exist" containerID="848ba6424d7559c75a5499a69efc2a06be63b7d3fa1f7360258b391270ab3b3f"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.137057 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"848ba6424d7559c75a5499a69efc2a06be63b7d3fa1f7360258b391270ab3b3f"} err="failed to get container status \"848ba6424d7559c75a5499a69efc2a06be63b7d3fa1f7360258b391270ab3b3f\": rpc error: code = NotFound desc = could not find container \"848ba6424d7559c75a5499a69efc2a06be63b7d3fa1f7360258b391270ab3b3f\": container with ID starting with 848ba6424d7559c75a5499a69efc2a06be63b7d3fa1f7360258b391270ab3b3f not found: ID does not exist"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.146074 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 12 15:09:51 crc kubenswrapper[4832]: E0312 15:09:51.146470 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="368ae710-da6a-4c6e-8146-1ed639743e78" containerName="nova-api-api"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.146487 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="368ae710-da6a-4c6e-8146-1ed639743e78" containerName="nova-api-api"
Mar 12 15:09:51 crc kubenswrapper[4832]: E0312 15:09:51.146547 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="368ae710-da6a-4c6e-8146-1ed639743e78" containerName="nova-api-log"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.146555 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="368ae710-da6a-4c6e-8146-1ed639743e78" containerName="nova-api-log"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.146730 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="368ae710-da6a-4c6e-8146-1ed639743e78" containerName="nova-api-api"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.146760 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="368ae710-da6a-4c6e-8146-1ed639743e78" containerName="nova-api-log"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.147802 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.149735 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.149828 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.153971 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.158855 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.226011 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " pod="openstack/nova-api-0"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.226066 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-config-data\") pod \"nova-api-0\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " pod="openstack/nova-api-0"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.226112 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q9wn\" (UniqueName: \"kubernetes.io/projected/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-kube-api-access-9q9wn\") pod \"nova-api-0\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " pod="openstack/nova-api-0"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.226135 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " pod="openstack/nova-api-0"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.226166 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-logs\") pod \"nova-api-0\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " pod="openstack/nova-api-0"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.226305 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-public-tls-certs\") pod \"nova-api-0\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " pod="openstack/nova-api-0"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.328281 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-config-data\") pod \"nova-api-0\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " pod="openstack/nova-api-0"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.328375 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q9wn\" (UniqueName: \"kubernetes.io/projected/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-kube-api-access-9q9wn\") pod \"nova-api-0\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " pod="openstack/nova-api-0"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.328416 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " pod="openstack/nova-api-0"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.328467 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-logs\") pod \"nova-api-0\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " pod="openstack/nova-api-0"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.328668 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-public-tls-certs\") pod \"nova-api-0\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " pod="openstack/nova-api-0"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.328859 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " pod="openstack/nova-api-0"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.329254 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-logs\") pod \"nova-api-0\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " pod="openstack/nova-api-0"
Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.336646 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " pod="openstack/nova-api-0" Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.336692 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " pod="openstack/nova-api-0" Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.337012 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-config-data\") pod \"nova-api-0\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " pod="openstack/nova-api-0" Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.337077 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-public-tls-certs\") pod \"nova-api-0\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " pod="openstack/nova-api-0" Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.350785 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q9wn\" (UniqueName: \"kubernetes.io/projected/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-kube-api-access-9q9wn\") pod \"nova-api-0\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " pod="openstack/nova-api-0" Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.470218 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 15:09:51 crc kubenswrapper[4832]: I0312 15:09:51.922568 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 15:09:51 crc kubenswrapper[4832]: W0312 15:09:51.923142 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7e393f0_f9ae_431e_98b7_eb3801fdfd22.slice/crio-dad77f334ff3af7ff077185a3f6a264431c9452fbeba36d8069be31c16355cec WatchSource:0}: Error finding container dad77f334ff3af7ff077185a3f6a264431c9452fbeba36d8069be31c16355cec: Status 404 returned error can't find the container with id dad77f334ff3af7ff077185a3f6a264431c9452fbeba36d8069be31c16355cec Mar 12 15:09:51 crc kubenswrapper[4832]: E0312 15:09:51.963611 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28a22ae2_4296_4673_b4eb_2ffd7065ac0a.slice/crio-conmon-629ef02557ec5e052c77017b16b74f188878aae766f79a6ef451f15c8ad2034e.scope\": RecentStats: unable to find data in memory cache]" Mar 12 15:09:52 crc kubenswrapper[4832]: I0312 15:09:52.046633 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f7e393f0-f9ae-431e-98b7-eb3801fdfd22","Type":"ContainerStarted","Data":"dad77f334ff3af7ff077185a3f6a264431c9452fbeba36d8069be31c16355cec"} Mar 12 15:09:52 crc kubenswrapper[4832]: I0312 15:09:52.050034 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e","Type":"ContainerStarted","Data":"0e46d1fe1d057a2072c4761d9dbca9dd5414170f5daa182809e2cf5ad99e8af6"} Mar 12 15:09:52 crc kubenswrapper[4832]: I0312 15:09:52.634467 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="368ae710-da6a-4c6e-8146-1ed639743e78" 
path="/var/lib/kubelet/pods/368ae710-da6a-4c6e-8146-1ed639743e78/volumes" Mar 12 15:09:53 crc kubenswrapper[4832]: I0312 15:09:53.071014 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e","Type":"ContainerStarted","Data":"67dfdb861466f99f94d160b57c97b4848d85409084cc79014106f0512b4032aa"} Mar 12 15:09:53 crc kubenswrapper[4832]: I0312 15:09:53.074606 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f7e393f0-f9ae-431e-98b7-eb3801fdfd22","Type":"ContainerStarted","Data":"385b3f44cd6f817fd2b2c678f30618d94f7a817411b3e93d20d567e3ce4cff05"} Mar 12 15:09:53 crc kubenswrapper[4832]: I0312 15:09:53.074753 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f7e393f0-f9ae-431e-98b7-eb3801fdfd22","Type":"ContainerStarted","Data":"abcd07a6568c0c4eb4649fbb7b04c9eaecfb913d22434af96f540571342f8f49"} Mar 12 15:09:53 crc kubenswrapper[4832]: I0312 15:09:53.091217 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.091199491 podStartE2EDuration="2.091199491s" podCreationTimestamp="2026-03-12 15:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:09:53.089950415 +0000 UTC m=+1351.733964651" watchObservedRunningTime="2026-03-12 15:09:53.091199491 +0000 UTC m=+1351.735213717" Mar 12 15:09:53 crc kubenswrapper[4832]: I0312 15:09:53.385788 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 15:09:53 crc kubenswrapper[4832]: I0312 15:09:53.386118 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 15:09:53 crc kubenswrapper[4832]: I0312 15:09:53.400346 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:53 crc kubenswrapper[4832]: I0312 15:09:53.429766 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:53 crc kubenswrapper[4832]: E0312 15:09:53.628489 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 502 Bad Gateway" image="registry.redhat.io/ubi9/httpd-24:latest" Mar 12 15:09:53 crc kubenswrapper[4832]: E0312 15:09:53.628697 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceilometer-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/tls.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceilometer-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/tls.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined
-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffpj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 502 Bad Gateway" logger="UnhandledError" Mar 12 15:09:53 crc kubenswrapper[4832]: E0312 15:09:53.630079 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"copying 
system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 502 Bad Gateway\"" pod="openstack/ceilometer-0" podUID="3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.085398 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e","Type":"ContainerStarted","Data":"f5e482f2bd2d8f33c8c3f4779291dd7f05a04b5cc22e3767668ca2cb3d1bb8da"} Mar 12 15:09:54 crc kubenswrapper[4832]: E0312 15:09:54.087824 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.117772 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.314032 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-2qlz2"] Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.315168 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2qlz2" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.318153 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.318304 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.327035 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2qlz2"] Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.396111 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79232ba7-6284-417a-9de7-8ba849bbeb7d-config-data\") pod \"nova-cell1-cell-mapping-2qlz2\" (UID: \"79232ba7-6284-417a-9de7-8ba849bbeb7d\") " pod="openstack/nova-cell1-cell-mapping-2qlz2" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.396152 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79232ba7-6284-417a-9de7-8ba849bbeb7d-scripts\") pod \"nova-cell1-cell-mapping-2qlz2\" (UID: \"79232ba7-6284-417a-9de7-8ba849bbeb7d\") " pod="openstack/nova-cell1-cell-mapping-2qlz2" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.396300 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdbq7\" (UniqueName: \"kubernetes.io/projected/79232ba7-6284-417a-9de7-8ba849bbeb7d-kube-api-access-tdbq7\") pod \"nova-cell1-cell-mapping-2qlz2\" (UID: \"79232ba7-6284-417a-9de7-8ba849bbeb7d\") " pod="openstack/nova-cell1-cell-mapping-2qlz2" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.396372 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/79232ba7-6284-417a-9de7-8ba849bbeb7d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2qlz2\" (UID: \"79232ba7-6284-417a-9de7-8ba849bbeb7d\") " pod="openstack/nova-cell1-cell-mapping-2qlz2" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.397661 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="61b7e836-b94e-4397-b34f-99bf775d778d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.397912 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="61b7e836-b94e-4397-b34f-99bf775d778d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.497977 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdbq7\" (UniqueName: \"kubernetes.io/projected/79232ba7-6284-417a-9de7-8ba849bbeb7d-kube-api-access-tdbq7\") pod \"nova-cell1-cell-mapping-2qlz2\" (UID: \"79232ba7-6284-417a-9de7-8ba849bbeb7d\") " pod="openstack/nova-cell1-cell-mapping-2qlz2" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.498262 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79232ba7-6284-417a-9de7-8ba849bbeb7d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2qlz2\" (UID: \"79232ba7-6284-417a-9de7-8ba849bbeb7d\") " pod="openstack/nova-cell1-cell-mapping-2qlz2" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.498459 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/79232ba7-6284-417a-9de7-8ba849bbeb7d-config-data\") pod \"nova-cell1-cell-mapping-2qlz2\" (UID: \"79232ba7-6284-417a-9de7-8ba849bbeb7d\") " pod="openstack/nova-cell1-cell-mapping-2qlz2" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.498561 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79232ba7-6284-417a-9de7-8ba849bbeb7d-scripts\") pod \"nova-cell1-cell-mapping-2qlz2\" (UID: \"79232ba7-6284-417a-9de7-8ba849bbeb7d\") " pod="openstack/nova-cell1-cell-mapping-2qlz2" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.504847 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79232ba7-6284-417a-9de7-8ba849bbeb7d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2qlz2\" (UID: \"79232ba7-6284-417a-9de7-8ba849bbeb7d\") " pod="openstack/nova-cell1-cell-mapping-2qlz2" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.506625 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79232ba7-6284-417a-9de7-8ba849bbeb7d-scripts\") pod \"nova-cell1-cell-mapping-2qlz2\" (UID: \"79232ba7-6284-417a-9de7-8ba849bbeb7d\") " pod="openstack/nova-cell1-cell-mapping-2qlz2" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.509139 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79232ba7-6284-417a-9de7-8ba849bbeb7d-config-data\") pod \"nova-cell1-cell-mapping-2qlz2\" (UID: \"79232ba7-6284-417a-9de7-8ba849bbeb7d\") " pod="openstack/nova-cell1-cell-mapping-2qlz2" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.520543 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdbq7\" (UniqueName: \"kubernetes.io/projected/79232ba7-6284-417a-9de7-8ba849bbeb7d-kube-api-access-tdbq7\") pod 
\"nova-cell1-cell-mapping-2qlz2\" (UID: \"79232ba7-6284-417a-9de7-8ba849bbeb7d\") " pod="openstack/nova-cell1-cell-mapping-2qlz2" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.635479 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2qlz2" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.748666 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.830823 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-hng8b"] Mar 12 15:09:54 crc kubenswrapper[4832]: I0312 15:09:54.831156 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-hng8b" podUID="a05724d1-f620-4a80-b256-a2d73ab25092" containerName="dnsmasq-dns" containerID="cri-o://e153ec789ee468252210400e299d88926950358d2e30db878679eb33c4ab5bfd" gracePeriod=10 Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.104719 4832 generic.go:334] "Generic (PLEG): container finished" podID="a05724d1-f620-4a80-b256-a2d73ab25092" containerID="e153ec789ee468252210400e299d88926950358d2e30db878679eb33c4ab5bfd" exitCode=0 Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.104876 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-hng8b" event={"ID":"a05724d1-f620-4a80-b256-a2d73ab25092","Type":"ContainerDied","Data":"e153ec789ee468252210400e299d88926950358d2e30db878679eb33c4ab5bfd"} Mar 12 15:09:55 crc kubenswrapper[4832]: E0312 15:09:55.118738 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e" Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.131405 4832 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2qlz2"] Mar 12 15:09:55 crc kubenswrapper[4832]: W0312 15:09:55.139145 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79232ba7_6284_417a_9de7_8ba849bbeb7d.slice/crio-e2feab22506521dafa270e9a5f51924d48ec4049bf4ef206ea86812071fd7b16 WatchSource:0}: Error finding container e2feab22506521dafa270e9a5f51924d48ec4049bf4ef206ea86812071fd7b16: Status 404 returned error can't find the container with id e2feab22506521dafa270e9a5f51924d48ec4049bf4ef206ea86812071fd7b16 Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.304177 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.420803 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqnv5\" (UniqueName: \"kubernetes.io/projected/a05724d1-f620-4a80-b256-a2d73ab25092-kube-api-access-mqnv5\") pod \"a05724d1-f620-4a80-b256-a2d73ab25092\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.420865 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-ovsdbserver-sb\") pod \"a05724d1-f620-4a80-b256-a2d73ab25092\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.420923 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-dns-swift-storage-0\") pod \"a05724d1-f620-4a80-b256-a2d73ab25092\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.421111 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-dns-svc\") pod \"a05724d1-f620-4a80-b256-a2d73ab25092\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.421128 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-ovsdbserver-nb\") pod \"a05724d1-f620-4a80-b256-a2d73ab25092\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.421151 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-config\") pod \"a05724d1-f620-4a80-b256-a2d73ab25092\" (UID: \"a05724d1-f620-4a80-b256-a2d73ab25092\") " Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.433268 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a05724d1-f620-4a80-b256-a2d73ab25092-kube-api-access-mqnv5" (OuterVolumeSpecName: "kube-api-access-mqnv5") pod "a05724d1-f620-4a80-b256-a2d73ab25092" (UID: "a05724d1-f620-4a80-b256-a2d73ab25092"). InnerVolumeSpecName "kube-api-access-mqnv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.487813 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a05724d1-f620-4a80-b256-a2d73ab25092" (UID: "a05724d1-f620-4a80-b256-a2d73ab25092"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.492084 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a05724d1-f620-4a80-b256-a2d73ab25092" (UID: "a05724d1-f620-4a80-b256-a2d73ab25092"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.508007 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a05724d1-f620-4a80-b256-a2d73ab25092" (UID: "a05724d1-f620-4a80-b256-a2d73ab25092"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.512361 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-config" (OuterVolumeSpecName: "config") pod "a05724d1-f620-4a80-b256-a2d73ab25092" (UID: "a05724d1-f620-4a80-b256-a2d73ab25092"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.516846 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a05724d1-f620-4a80-b256-a2d73ab25092" (UID: "a05724d1-f620-4a80-b256-a2d73ab25092"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.523332 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.523356 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.523368 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.523379 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqnv5\" (UniqueName: \"kubernetes.io/projected/a05724d1-f620-4a80-b256-a2d73ab25092-kube-api-access-mqnv5\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.523388 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:55 crc kubenswrapper[4832]: I0312 15:09:55.523397 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a05724d1-f620-4a80-b256-a2d73ab25092-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:56 crc kubenswrapper[4832]: I0312 15:09:56.115763 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2qlz2" event={"ID":"79232ba7-6284-417a-9de7-8ba849bbeb7d","Type":"ContainerStarted","Data":"4d304cb9b8b919f61719509a8ccb150ff6ee3c9e7daf6e7907cdda441868837d"} Mar 12 15:09:56 crc 
kubenswrapper[4832]: I0312 15:09:56.116100 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2qlz2" event={"ID":"79232ba7-6284-417a-9de7-8ba849bbeb7d","Type":"ContainerStarted","Data":"e2feab22506521dafa270e9a5f51924d48ec4049bf4ef206ea86812071fd7b16"} Mar 12 15:09:56 crc kubenswrapper[4832]: I0312 15:09:56.118474 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-hng8b" event={"ID":"a05724d1-f620-4a80-b256-a2d73ab25092","Type":"ContainerDied","Data":"f8059f459617667015a9d7dbe756476ecdba96cb830ab7205b02584259f825a4"} Mar 12 15:09:56 crc kubenswrapper[4832]: I0312 15:09:56.118589 4832 scope.go:117] "RemoveContainer" containerID="e153ec789ee468252210400e299d88926950358d2e30db878679eb33c4ab5bfd" Mar 12 15:09:56 crc kubenswrapper[4832]: I0312 15:09:56.118835 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-hng8b" Mar 12 15:09:56 crc kubenswrapper[4832]: I0312 15:09:56.144540 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-2qlz2" podStartSLOduration=2.144487089 podStartE2EDuration="2.144487089s" podCreationTimestamp="2026-03-12 15:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:09:56.143750117 +0000 UTC m=+1354.787764373" watchObservedRunningTime="2026-03-12 15:09:56.144487089 +0000 UTC m=+1354.788501325" Mar 12 15:09:56 crc kubenswrapper[4832]: I0312 15:09:56.150877 4832 scope.go:117] "RemoveContainer" containerID="45cfba313928b4976b854b46d34a1acb710937c63ed46a602bbfed7ef629df6e" Mar 12 15:09:56 crc kubenswrapper[4832]: I0312 15:09:56.182420 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-hng8b"] Mar 12 15:09:56 crc kubenswrapper[4832]: I0312 15:09:56.194133 4832 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-bccf8f775-hng8b"] Mar 12 15:09:56 crc kubenswrapper[4832]: I0312 15:09:56.314403 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:09:56 crc kubenswrapper[4832]: I0312 15:09:56.314472 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:09:56 crc kubenswrapper[4832]: I0312 15:09:56.638498 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a05724d1-f620-4a80-b256-a2d73ab25092" path="/var/lib/kubelet/pods/a05724d1-f620-4a80-b256-a2d73ab25092/volumes" Mar 12 15:10:00 crc kubenswrapper[4832]: I0312 15:10:00.133296 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555470-brnp7"] Mar 12 15:10:00 crc kubenswrapper[4832]: E0312 15:10:00.134146 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a05724d1-f620-4a80-b256-a2d73ab25092" containerName="dnsmasq-dns" Mar 12 15:10:00 crc kubenswrapper[4832]: I0312 15:10:00.134160 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a05724d1-f620-4a80-b256-a2d73ab25092" containerName="dnsmasq-dns" Mar 12 15:10:00 crc kubenswrapper[4832]: E0312 15:10:00.134198 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a05724d1-f620-4a80-b256-a2d73ab25092" containerName="init" Mar 12 15:10:00 crc kubenswrapper[4832]: I0312 15:10:00.134206 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a05724d1-f620-4a80-b256-a2d73ab25092" containerName="init" Mar 12 
15:10:00 crc kubenswrapper[4832]: I0312 15:10:00.134448 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a05724d1-f620-4a80-b256-a2d73ab25092" containerName="dnsmasq-dns" Mar 12 15:10:00 crc kubenswrapper[4832]: I0312 15:10:00.135223 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555470-brnp7" Mar 12 15:10:00 crc kubenswrapper[4832]: I0312 15:10:00.138330 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:10:00 crc kubenswrapper[4832]: I0312 15:10:00.138491 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:10:00 crc kubenswrapper[4832]: I0312 15:10:00.139897 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:10:00 crc kubenswrapper[4832]: I0312 15:10:00.144216 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555470-brnp7"] Mar 12 15:10:00 crc kubenswrapper[4832]: I0312 15:10:00.192695 4832 generic.go:334] "Generic (PLEG): container finished" podID="79232ba7-6284-417a-9de7-8ba849bbeb7d" containerID="4d304cb9b8b919f61719509a8ccb150ff6ee3c9e7daf6e7907cdda441868837d" exitCode=0 Mar 12 15:10:00 crc kubenswrapper[4832]: I0312 15:10:00.192749 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2qlz2" event={"ID":"79232ba7-6284-417a-9de7-8ba849bbeb7d","Type":"ContainerDied","Data":"4d304cb9b8b919f61719509a8ccb150ff6ee3c9e7daf6e7907cdda441868837d"} Mar 12 15:10:00 crc kubenswrapper[4832]: I0312 15:10:00.227563 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xq65\" (UniqueName: \"kubernetes.io/projected/1fa1e044-9fc1-4e3b-a815-5a74d4d3913c-kube-api-access-2xq65\") pod \"auto-csr-approver-29555470-brnp7\" (UID: 
\"1fa1e044-9fc1-4e3b-a815-5a74d4d3913c\") " pod="openshift-infra/auto-csr-approver-29555470-brnp7" Mar 12 15:10:00 crc kubenswrapper[4832]: I0312 15:10:00.329497 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xq65\" (UniqueName: \"kubernetes.io/projected/1fa1e044-9fc1-4e3b-a815-5a74d4d3913c-kube-api-access-2xq65\") pod \"auto-csr-approver-29555470-brnp7\" (UID: \"1fa1e044-9fc1-4e3b-a815-5a74d4d3913c\") " pod="openshift-infra/auto-csr-approver-29555470-brnp7" Mar 12 15:10:00 crc kubenswrapper[4832]: I0312 15:10:00.350031 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xq65\" (UniqueName: \"kubernetes.io/projected/1fa1e044-9fc1-4e3b-a815-5a74d4d3913c-kube-api-access-2xq65\") pod \"auto-csr-approver-29555470-brnp7\" (UID: \"1fa1e044-9fc1-4e3b-a815-5a74d4d3913c\") " pod="openshift-infra/auto-csr-approver-29555470-brnp7" Mar 12 15:10:00 crc kubenswrapper[4832]: I0312 15:10:00.456932 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555470-brnp7" Mar 12 15:10:00 crc kubenswrapper[4832]: I0312 15:10:00.925991 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555470-brnp7"] Mar 12 15:10:00 crc kubenswrapper[4832]: W0312 15:10:00.931197 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fa1e044_9fc1_4e3b_a815_5a74d4d3913c.slice/crio-c347415c8bab152e53ad0e7a22b8781b802a6675c5306d4fc4a00befbe73d878 WatchSource:0}: Error finding container c347415c8bab152e53ad0e7a22b8781b802a6675c5306d4fc4a00befbe73d878: Status 404 returned error can't find the container with id c347415c8bab152e53ad0e7a22b8781b802a6675c5306d4fc4a00befbe73d878 Mar 12 15:10:01 crc kubenswrapper[4832]: I0312 15:10:01.204650 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555470-brnp7" event={"ID":"1fa1e044-9fc1-4e3b-a815-5a74d4d3913c","Type":"ContainerStarted","Data":"c347415c8bab152e53ad0e7a22b8781b802a6675c5306d4fc4a00befbe73d878"} Mar 12 15:10:01 crc kubenswrapper[4832]: I0312 15:10:01.470913 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 15:10:01 crc kubenswrapper[4832]: I0312 15:10:01.471231 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 15:10:01 crc kubenswrapper[4832]: I0312 15:10:01.654287 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2qlz2" Mar 12 15:10:01 crc kubenswrapper[4832]: I0312 15:10:01.763496 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdbq7\" (UniqueName: \"kubernetes.io/projected/79232ba7-6284-417a-9de7-8ba849bbeb7d-kube-api-access-tdbq7\") pod \"79232ba7-6284-417a-9de7-8ba849bbeb7d\" (UID: \"79232ba7-6284-417a-9de7-8ba849bbeb7d\") " Mar 12 15:10:01 crc kubenswrapper[4832]: I0312 15:10:01.763709 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79232ba7-6284-417a-9de7-8ba849bbeb7d-scripts\") pod \"79232ba7-6284-417a-9de7-8ba849bbeb7d\" (UID: \"79232ba7-6284-417a-9de7-8ba849bbeb7d\") " Mar 12 15:10:01 crc kubenswrapper[4832]: I0312 15:10:01.763865 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79232ba7-6284-417a-9de7-8ba849bbeb7d-combined-ca-bundle\") pod \"79232ba7-6284-417a-9de7-8ba849bbeb7d\" (UID: \"79232ba7-6284-417a-9de7-8ba849bbeb7d\") " Mar 12 15:10:01 crc kubenswrapper[4832]: I0312 15:10:01.763914 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79232ba7-6284-417a-9de7-8ba849bbeb7d-config-data\") pod \"79232ba7-6284-417a-9de7-8ba849bbeb7d\" (UID: \"79232ba7-6284-417a-9de7-8ba849bbeb7d\") " Mar 12 15:10:01 crc kubenswrapper[4832]: I0312 15:10:01.768675 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79232ba7-6284-417a-9de7-8ba849bbeb7d-scripts" (OuterVolumeSpecName: "scripts") pod "79232ba7-6284-417a-9de7-8ba849bbeb7d" (UID: "79232ba7-6284-417a-9de7-8ba849bbeb7d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:10:01 crc kubenswrapper[4832]: I0312 15:10:01.768970 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79232ba7-6284-417a-9de7-8ba849bbeb7d-kube-api-access-tdbq7" (OuterVolumeSpecName: "kube-api-access-tdbq7") pod "79232ba7-6284-417a-9de7-8ba849bbeb7d" (UID: "79232ba7-6284-417a-9de7-8ba849bbeb7d"). InnerVolumeSpecName "kube-api-access-tdbq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:10:01 crc kubenswrapper[4832]: I0312 15:10:01.789562 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79232ba7-6284-417a-9de7-8ba849bbeb7d-config-data" (OuterVolumeSpecName: "config-data") pod "79232ba7-6284-417a-9de7-8ba849bbeb7d" (UID: "79232ba7-6284-417a-9de7-8ba849bbeb7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:10:01 crc kubenswrapper[4832]: I0312 15:10:01.820227 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79232ba7-6284-417a-9de7-8ba849bbeb7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79232ba7-6284-417a-9de7-8ba849bbeb7d" (UID: "79232ba7-6284-417a-9de7-8ba849bbeb7d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:10:01 crc kubenswrapper[4832]: I0312 15:10:01.866951 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79232ba7-6284-417a-9de7-8ba849bbeb7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:01 crc kubenswrapper[4832]: I0312 15:10:01.867166 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79232ba7-6284-417a-9de7-8ba849bbeb7d-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:01 crc kubenswrapper[4832]: I0312 15:10:01.867304 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdbq7\" (UniqueName: \"kubernetes.io/projected/79232ba7-6284-417a-9de7-8ba849bbeb7d-kube-api-access-tdbq7\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:01 crc kubenswrapper[4832]: I0312 15:10:01.867458 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79232ba7-6284-417a-9de7-8ba849bbeb7d-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:02 crc kubenswrapper[4832]: I0312 15:10:02.221711 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2qlz2" event={"ID":"79232ba7-6284-417a-9de7-8ba849bbeb7d","Type":"ContainerDied","Data":"e2feab22506521dafa270e9a5f51924d48ec4049bf4ef206ea86812071fd7b16"} Mar 12 15:10:02 crc kubenswrapper[4832]: I0312 15:10:02.222012 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2feab22506521dafa270e9a5f51924d48ec4049bf4ef206ea86812071fd7b16" Mar 12 15:10:02 crc kubenswrapper[4832]: I0312 15:10:02.221771 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2qlz2" Mar 12 15:10:02 crc kubenswrapper[4832]: E0312 15:10:02.281990 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28a22ae2_4296_4673_b4eb_2ffd7065ac0a.slice/crio-conmon-629ef02557ec5e052c77017b16b74f188878aae766f79a6ef451f15c8ad2034e.scope\": RecentStats: unable to find data in memory cache]" Mar 12 15:10:02 crc kubenswrapper[4832]: I0312 15:10:02.416928 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 15:10:02 crc kubenswrapper[4832]: I0312 15:10:02.417187 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f7e393f0-f9ae-431e-98b7-eb3801fdfd22" containerName="nova-api-log" containerID="cri-o://abcd07a6568c0c4eb4649fbb7b04c9eaecfb913d22434af96f540571342f8f49" gracePeriod=30 Mar 12 15:10:02 crc kubenswrapper[4832]: I0312 15:10:02.417696 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f7e393f0-f9ae-431e-98b7-eb3801fdfd22" containerName="nova-api-api" containerID="cri-o://385b3f44cd6f817fd2b2c678f30618d94f7a817411b3e93d20d567e3ce4cff05" gracePeriod=30 Mar 12 15:10:02 crc kubenswrapper[4832]: I0312 15:10:02.428693 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f7e393f0-f9ae-431e-98b7-eb3801fdfd22" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": EOF" Mar 12 15:10:02 crc kubenswrapper[4832]: I0312 15:10:02.428786 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f7e393f0-f9ae-431e-98b7-eb3801fdfd22" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": EOF" Mar 12 15:10:02 crc kubenswrapper[4832]: I0312 15:10:02.439158 4832 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 15:10:02 crc kubenswrapper[4832]: I0312 15:10:02.439350 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d1af9461-d1f9-4531-92a3-7c8904519749" containerName="nova-scheduler-scheduler" containerID="cri-o://63f9209b2d9755ccef87972c8ba6fa8923112b871e276016119374f0471d73e4" gracePeriod=30 Mar 12 15:10:02 crc kubenswrapper[4832]: I0312 15:10:02.468721 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 15:10:02 crc kubenswrapper[4832]: I0312 15:10:02.468977 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="61b7e836-b94e-4397-b34f-99bf775d778d" containerName="nova-metadata-log" containerID="cri-o://86b80db7f5cc39fb2aa8ee8466eacc34cd26a70cb1d3f07d3a8b14e892831fb4" gracePeriod=30 Mar 12 15:10:02 crc kubenswrapper[4832]: I0312 15:10:02.469440 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="61b7e836-b94e-4397-b34f-99bf775d778d" containerName="nova-metadata-metadata" containerID="cri-o://58dccd4df51ab52e4dbcb3e6702e5f4c05fa405bd096434c927eed07c29a3c65" gracePeriod=30 Mar 12 15:10:03 crc kubenswrapper[4832]: I0312 15:10:03.252726 4832 generic.go:334] "Generic (PLEG): container finished" podID="61b7e836-b94e-4397-b34f-99bf775d778d" containerID="86b80db7f5cc39fb2aa8ee8466eacc34cd26a70cb1d3f07d3a8b14e892831fb4" exitCode=143 Mar 12 15:10:03 crc kubenswrapper[4832]: I0312 15:10:03.253294 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"61b7e836-b94e-4397-b34f-99bf775d778d","Type":"ContainerDied","Data":"86b80db7f5cc39fb2aa8ee8466eacc34cd26a70cb1d3f07d3a8b14e892831fb4"} Mar 12 15:10:03 crc kubenswrapper[4832]: I0312 15:10:03.268295 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fa1e044-9fc1-4e3b-a815-5a74d4d3913c" 
containerID="b329b4b464e5842d3ebc0d53dbf11c0bf62ea86202ef39aeef2c06cfbc5ce3cf" exitCode=0 Mar 12 15:10:03 crc kubenswrapper[4832]: I0312 15:10:03.268392 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555470-brnp7" event={"ID":"1fa1e044-9fc1-4e3b-a815-5a74d4d3913c","Type":"ContainerDied","Data":"b329b4b464e5842d3ebc0d53dbf11c0bf62ea86202ef39aeef2c06cfbc5ce3cf"} Mar 12 15:10:03 crc kubenswrapper[4832]: I0312 15:10:03.274290 4832 generic.go:334] "Generic (PLEG): container finished" podID="f7e393f0-f9ae-431e-98b7-eb3801fdfd22" containerID="abcd07a6568c0c4eb4649fbb7b04c9eaecfb913d22434af96f540571342f8f49" exitCode=143 Mar 12 15:10:03 crc kubenswrapper[4832]: I0312 15:10:03.274325 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f7e393f0-f9ae-431e-98b7-eb3801fdfd22","Type":"ContainerDied","Data":"abcd07a6568c0c4eb4649fbb7b04c9eaecfb913d22434af96f540571342f8f49"} Mar 12 15:10:04 crc kubenswrapper[4832]: E0312 15:10:04.142442 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="63f9209b2d9755ccef87972c8ba6fa8923112b871e276016119374f0471d73e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 15:10:04 crc kubenswrapper[4832]: E0312 15:10:04.143857 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="63f9209b2d9755ccef87972c8ba6fa8923112b871e276016119374f0471d73e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 15:10:04 crc kubenswrapper[4832]: E0312 15:10:04.145358 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit 
code -1" containerID="63f9209b2d9755ccef87972c8ba6fa8923112b871e276016119374f0471d73e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 15:10:04 crc kubenswrapper[4832]: E0312 15:10:04.145411 4832 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d1af9461-d1f9-4531-92a3-7c8904519749" containerName="nova-scheduler-scheduler" Mar 12 15:10:04 crc kubenswrapper[4832]: I0312 15:10:04.704049 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555470-brnp7" Mar 12 15:10:04 crc kubenswrapper[4832]: I0312 15:10:04.831039 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xq65\" (UniqueName: \"kubernetes.io/projected/1fa1e044-9fc1-4e3b-a815-5a74d4d3913c-kube-api-access-2xq65\") pod \"1fa1e044-9fc1-4e3b-a815-5a74d4d3913c\" (UID: \"1fa1e044-9fc1-4e3b-a815-5a74d4d3913c\") " Mar 12 15:10:04 crc kubenswrapper[4832]: I0312 15:10:04.836184 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fa1e044-9fc1-4e3b-a815-5a74d4d3913c-kube-api-access-2xq65" (OuterVolumeSpecName: "kube-api-access-2xq65") pod "1fa1e044-9fc1-4e3b-a815-5a74d4d3913c" (UID: "1fa1e044-9fc1-4e3b-a815-5a74d4d3913c"). InnerVolumeSpecName "kube-api-access-2xq65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:10:04 crc kubenswrapper[4832]: I0312 15:10:04.933153 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xq65\" (UniqueName: \"kubernetes.io/projected/1fa1e044-9fc1-4e3b-a815-5a74d4d3913c-kube-api-access-2xq65\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:05 crc kubenswrapper[4832]: I0312 15:10:05.297368 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555470-brnp7" event={"ID":"1fa1e044-9fc1-4e3b-a815-5a74d4d3913c","Type":"ContainerDied","Data":"c347415c8bab152e53ad0e7a22b8781b802a6675c5306d4fc4a00befbe73d878"} Mar 12 15:10:05 crc kubenswrapper[4832]: I0312 15:10:05.297691 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c347415c8bab152e53ad0e7a22b8781b802a6675c5306d4fc4a00befbe73d878" Mar 12 15:10:05 crc kubenswrapper[4832]: I0312 15:10:05.297595 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555470-brnp7" Mar 12 15:10:05 crc kubenswrapper[4832]: I0312 15:10:05.787646 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555464-drb4s"] Mar 12 15:10:05 crc kubenswrapper[4832]: I0312 15:10:05.797337 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555464-drb4s"] Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.155667 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.261486 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b7e836-b94e-4397-b34f-99bf775d778d-nova-metadata-tls-certs\") pod \"61b7e836-b94e-4397-b34f-99bf775d778d\" (UID: \"61b7e836-b94e-4397-b34f-99bf775d778d\") " Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.261591 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61b7e836-b94e-4397-b34f-99bf775d778d-combined-ca-bundle\") pod \"61b7e836-b94e-4397-b34f-99bf775d778d\" (UID: \"61b7e836-b94e-4397-b34f-99bf775d778d\") " Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.261811 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61b7e836-b94e-4397-b34f-99bf775d778d-config-data\") pod \"61b7e836-b94e-4397-b34f-99bf775d778d\" (UID: \"61b7e836-b94e-4397-b34f-99bf775d778d\") " Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.261862 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61b7e836-b94e-4397-b34f-99bf775d778d-logs\") pod \"61b7e836-b94e-4397-b34f-99bf775d778d\" (UID: \"61b7e836-b94e-4397-b34f-99bf775d778d\") " Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.261981 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stzt6\" (UniqueName: \"kubernetes.io/projected/61b7e836-b94e-4397-b34f-99bf775d778d-kube-api-access-stzt6\") pod \"61b7e836-b94e-4397-b34f-99bf775d778d\" (UID: \"61b7e836-b94e-4397-b34f-99bf775d778d\") " Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.262717 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/61b7e836-b94e-4397-b34f-99bf775d778d-logs" (OuterVolumeSpecName: "logs") pod "61b7e836-b94e-4397-b34f-99bf775d778d" (UID: "61b7e836-b94e-4397-b34f-99bf775d778d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.263248 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61b7e836-b94e-4397-b34f-99bf775d778d-logs\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.276852 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61b7e836-b94e-4397-b34f-99bf775d778d-kube-api-access-stzt6" (OuterVolumeSpecName: "kube-api-access-stzt6") pod "61b7e836-b94e-4397-b34f-99bf775d778d" (UID: "61b7e836-b94e-4397-b34f-99bf775d778d"). InnerVolumeSpecName "kube-api-access-stzt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.291706 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61b7e836-b94e-4397-b34f-99bf775d778d-config-data" (OuterVolumeSpecName: "config-data") pod "61b7e836-b94e-4397-b34f-99bf775d778d" (UID: "61b7e836-b94e-4397-b34f-99bf775d778d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.306912 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61b7e836-b94e-4397-b34f-99bf775d778d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61b7e836-b94e-4397-b34f-99bf775d778d" (UID: "61b7e836-b94e-4397-b34f-99bf775d778d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.310279 4832 generic.go:334] "Generic (PLEG): container finished" podID="61b7e836-b94e-4397-b34f-99bf775d778d" containerID="58dccd4df51ab52e4dbcb3e6702e5f4c05fa405bd096434c927eed07c29a3c65" exitCode=0 Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.310329 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"61b7e836-b94e-4397-b34f-99bf775d778d","Type":"ContainerDied","Data":"58dccd4df51ab52e4dbcb3e6702e5f4c05fa405bd096434c927eed07c29a3c65"} Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.310361 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"61b7e836-b94e-4397-b34f-99bf775d778d","Type":"ContainerDied","Data":"b85a2bf81d8b6966e93bebdb6634fa969ff464c33dae21cf4ac6f85dbbcae078"} Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.310383 4832 scope.go:117] "RemoveContainer" containerID="58dccd4df51ab52e4dbcb3e6702e5f4c05fa405bd096434c927eed07c29a3c65" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.310636 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.322959 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61b7e836-b94e-4397-b34f-99bf775d778d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "61b7e836-b94e-4397-b34f-99bf775d778d" (UID: "61b7e836-b94e-4397-b34f-99bf775d778d"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.366029 4832 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b7e836-b94e-4397-b34f-99bf775d778d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.366108 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61b7e836-b94e-4397-b34f-99bf775d778d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.366122 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61b7e836-b94e-4397-b34f-99bf775d778d-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.366146 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stzt6\" (UniqueName: \"kubernetes.io/projected/61b7e836-b94e-4397-b34f-99bf775d778d-kube-api-access-stzt6\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.395013 4832 scope.go:117] "RemoveContainer" containerID="86b80db7f5cc39fb2aa8ee8466eacc34cd26a70cb1d3f07d3a8b14e892831fb4" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.417657 4832 scope.go:117] "RemoveContainer" containerID="58dccd4df51ab52e4dbcb3e6702e5f4c05fa405bd096434c927eed07c29a3c65" Mar 12 15:10:06 crc kubenswrapper[4832]: E0312 15:10:06.418000 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58dccd4df51ab52e4dbcb3e6702e5f4c05fa405bd096434c927eed07c29a3c65\": container with ID starting with 58dccd4df51ab52e4dbcb3e6702e5f4c05fa405bd096434c927eed07c29a3c65 not found: ID does not exist" containerID="58dccd4df51ab52e4dbcb3e6702e5f4c05fa405bd096434c927eed07c29a3c65" Mar 12 15:10:06 crc 
kubenswrapper[4832]: I0312 15:10:06.418033 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58dccd4df51ab52e4dbcb3e6702e5f4c05fa405bd096434c927eed07c29a3c65"} err="failed to get container status \"58dccd4df51ab52e4dbcb3e6702e5f4c05fa405bd096434c927eed07c29a3c65\": rpc error: code = NotFound desc = could not find container \"58dccd4df51ab52e4dbcb3e6702e5f4c05fa405bd096434c927eed07c29a3c65\": container with ID starting with 58dccd4df51ab52e4dbcb3e6702e5f4c05fa405bd096434c927eed07c29a3c65 not found: ID does not exist" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.418053 4832 scope.go:117] "RemoveContainer" containerID="86b80db7f5cc39fb2aa8ee8466eacc34cd26a70cb1d3f07d3a8b14e892831fb4" Mar 12 15:10:06 crc kubenswrapper[4832]: E0312 15:10:06.418418 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86b80db7f5cc39fb2aa8ee8466eacc34cd26a70cb1d3f07d3a8b14e892831fb4\": container with ID starting with 86b80db7f5cc39fb2aa8ee8466eacc34cd26a70cb1d3f07d3a8b14e892831fb4 not found: ID does not exist" containerID="86b80db7f5cc39fb2aa8ee8466eacc34cd26a70cb1d3f07d3a8b14e892831fb4" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.418439 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86b80db7f5cc39fb2aa8ee8466eacc34cd26a70cb1d3f07d3a8b14e892831fb4"} err="failed to get container status \"86b80db7f5cc39fb2aa8ee8466eacc34cd26a70cb1d3f07d3a8b14e892831fb4\": rpc error: code = NotFound desc = could not find container \"86b80db7f5cc39fb2aa8ee8466eacc34cd26a70cb1d3f07d3a8b14e892831fb4\": container with ID starting with 86b80db7f5cc39fb2aa8ee8466eacc34cd26a70cb1d3f07d3a8b14e892831fb4 not found: ID does not exist" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.639582 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bd68034-e452-409f-aeb1-121908cb2498" 
path="/var/lib/kubelet/pods/7bd68034-e452-409f-aeb1-121908cb2498/volumes" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.665348 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.680943 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.693887 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 12 15:10:06 crc kubenswrapper[4832]: E0312 15:10:06.694296 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa1e044-9fc1-4e3b-a815-5a74d4d3913c" containerName="oc" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.694317 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa1e044-9fc1-4e3b-a815-5a74d4d3913c" containerName="oc" Mar 12 15:10:06 crc kubenswrapper[4832]: E0312 15:10:06.694335 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61b7e836-b94e-4397-b34f-99bf775d778d" containerName="nova-metadata-log" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.694343 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="61b7e836-b94e-4397-b34f-99bf775d778d" containerName="nova-metadata-log" Mar 12 15:10:06 crc kubenswrapper[4832]: E0312 15:10:06.694363 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79232ba7-6284-417a-9de7-8ba849bbeb7d" containerName="nova-manage" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.694371 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="79232ba7-6284-417a-9de7-8ba849bbeb7d" containerName="nova-manage" Mar 12 15:10:06 crc kubenswrapper[4832]: E0312 15:10:06.694383 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61b7e836-b94e-4397-b34f-99bf775d778d" containerName="nova-metadata-metadata" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.694389 4832 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="61b7e836-b94e-4397-b34f-99bf775d778d" containerName="nova-metadata-metadata" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.694582 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="79232ba7-6284-417a-9de7-8ba849bbeb7d" containerName="nova-manage" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.694602 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fa1e044-9fc1-4e3b-a815-5a74d4d3913c" containerName="oc" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.694612 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="61b7e836-b94e-4397-b34f-99bf775d778d" containerName="nova-metadata-log" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.694623 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="61b7e836-b94e-4397-b34f-99bf775d778d" containerName="nova-metadata-metadata" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.695824 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.700097 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.700169 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.711351 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.875737 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2r9g\" (UniqueName: \"kubernetes.io/projected/864eb0ae-dd5f-438f-81a0-e48bf297eecb-kube-api-access-z2r9g\") pod \"nova-metadata-0\" (UID: \"864eb0ae-dd5f-438f-81a0-e48bf297eecb\") " pod="openstack/nova-metadata-0" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.875815 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/864eb0ae-dd5f-438f-81a0-e48bf297eecb-logs\") pod \"nova-metadata-0\" (UID: \"864eb0ae-dd5f-438f-81a0-e48bf297eecb\") " pod="openstack/nova-metadata-0" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.876269 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/864eb0ae-dd5f-438f-81a0-e48bf297eecb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"864eb0ae-dd5f-438f-81a0-e48bf297eecb\") " pod="openstack/nova-metadata-0" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.876381 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/864eb0ae-dd5f-438f-81a0-e48bf297eecb-config-data\") pod \"nova-metadata-0\" (UID: 
\"864eb0ae-dd5f-438f-81a0-e48bf297eecb\") " pod="openstack/nova-metadata-0" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.876452 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/864eb0ae-dd5f-438f-81a0-e48bf297eecb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"864eb0ae-dd5f-438f-81a0-e48bf297eecb\") " pod="openstack/nova-metadata-0" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.979385 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/864eb0ae-dd5f-438f-81a0-e48bf297eecb-config-data\") pod \"nova-metadata-0\" (UID: \"864eb0ae-dd5f-438f-81a0-e48bf297eecb\") " pod="openstack/nova-metadata-0" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.979531 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/864eb0ae-dd5f-438f-81a0-e48bf297eecb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"864eb0ae-dd5f-438f-81a0-e48bf297eecb\") " pod="openstack/nova-metadata-0" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.979667 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2r9g\" (UniqueName: \"kubernetes.io/projected/864eb0ae-dd5f-438f-81a0-e48bf297eecb-kube-api-access-z2r9g\") pod \"nova-metadata-0\" (UID: \"864eb0ae-dd5f-438f-81a0-e48bf297eecb\") " pod="openstack/nova-metadata-0" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.979717 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/864eb0ae-dd5f-438f-81a0-e48bf297eecb-logs\") pod \"nova-metadata-0\" (UID: \"864eb0ae-dd5f-438f-81a0-e48bf297eecb\") " pod="openstack/nova-metadata-0" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.979924 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/864eb0ae-dd5f-438f-81a0-e48bf297eecb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"864eb0ae-dd5f-438f-81a0-e48bf297eecb\") " pod="openstack/nova-metadata-0" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.980437 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/864eb0ae-dd5f-438f-81a0-e48bf297eecb-logs\") pod \"nova-metadata-0\" (UID: \"864eb0ae-dd5f-438f-81a0-e48bf297eecb\") " pod="openstack/nova-metadata-0" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.983928 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/864eb0ae-dd5f-438f-81a0-e48bf297eecb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"864eb0ae-dd5f-438f-81a0-e48bf297eecb\") " pod="openstack/nova-metadata-0" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.983981 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/864eb0ae-dd5f-438f-81a0-e48bf297eecb-config-data\") pod \"nova-metadata-0\" (UID: \"864eb0ae-dd5f-438f-81a0-e48bf297eecb\") " pod="openstack/nova-metadata-0" Mar 12 15:10:06 crc kubenswrapper[4832]: I0312 15:10:06.984859 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/864eb0ae-dd5f-438f-81a0-e48bf297eecb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"864eb0ae-dd5f-438f-81a0-e48bf297eecb\") " pod="openstack/nova-metadata-0" Mar 12 15:10:07 crc kubenswrapper[4832]: I0312 15:10:07.000059 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2r9g\" (UniqueName: \"kubernetes.io/projected/864eb0ae-dd5f-438f-81a0-e48bf297eecb-kube-api-access-z2r9g\") pod \"nova-metadata-0\" 
(UID: \"864eb0ae-dd5f-438f-81a0-e48bf297eecb\") " pod="openstack/nova-metadata-0" Mar 12 15:10:07 crc kubenswrapper[4832]: I0312 15:10:07.030902 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.354405 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.367286 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.367325 4832 generic.go:334] "Generic (PLEG): container finished" podID="f7e393f0-f9ae-431e-98b7-eb3801fdfd22" containerID="385b3f44cd6f817fd2b2c678f30618d94f7a817411b3e93d20d567e3ce4cff05" exitCode=0 Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.367409 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f7e393f0-f9ae-431e-98b7-eb3801fdfd22","Type":"ContainerDied","Data":"385b3f44cd6f817fd2b2c678f30618d94f7a817411b3e93d20d567e3ce4cff05"} Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.367446 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f7e393f0-f9ae-431e-98b7-eb3801fdfd22","Type":"ContainerDied","Data":"dad77f334ff3af7ff077185a3f6a264431c9452fbeba36d8069be31c16355cec"} Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.367473 4832 scope.go:117] "RemoveContainer" containerID="385b3f44cd6f817fd2b2c678f30618d94f7a817411b3e93d20d567e3ce4cff05" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.367846 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.375456 4832 generic.go:334] "Generic (PLEG): container finished" podID="d1af9461-d1f9-4531-92a3-7c8904519749" 
containerID="63f9209b2d9755ccef87972c8ba6fa8923112b871e276016119374f0471d73e4" exitCode=0 Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.375526 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d1af9461-d1f9-4531-92a3-7c8904519749","Type":"ContainerDied","Data":"63f9209b2d9755ccef87972c8ba6fa8923112b871e276016119374f0471d73e4"} Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.375559 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d1af9461-d1f9-4531-92a3-7c8904519749","Type":"ContainerDied","Data":"b2798235ca7335d84a59be28a9bfb55c2fe17ac5d7a509ca50cd7abb242a28c2"} Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.375613 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 15:10:08 crc kubenswrapper[4832]: W0312 15:10:08.385553 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod864eb0ae_dd5f_438f_81a0_e48bf297eecb.slice/crio-801ca67efb862be74426a70b84dedde5b872f0a5bc78d69993e848b99d6184b5 WatchSource:0}: Error finding container 801ca67efb862be74426a70b84dedde5b872f0a5bc78d69993e848b99d6184b5: Status 404 returned error can't find the container with id 801ca67efb862be74426a70b84dedde5b872f0a5bc78d69993e848b99d6184b5 Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.412213 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4bzd\" (UniqueName: \"kubernetes.io/projected/d1af9461-d1f9-4531-92a3-7c8904519749-kube-api-access-w4bzd\") pod \"d1af9461-d1f9-4531-92a3-7c8904519749\" (UID: \"d1af9461-d1f9-4531-92a3-7c8904519749\") " Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.413770 4832 scope.go:117] "RemoveContainer" containerID="abcd07a6568c0c4eb4649fbb7b04c9eaecfb913d22434af96f540571342f8f49" Mar 12 15:10:08 crc kubenswrapper[4832]: 
I0312 15:10:08.422230 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1af9461-d1f9-4531-92a3-7c8904519749-kube-api-access-w4bzd" (OuterVolumeSpecName: "kube-api-access-w4bzd") pod "d1af9461-d1f9-4531-92a3-7c8904519749" (UID: "d1af9461-d1f9-4531-92a3-7c8904519749"). InnerVolumeSpecName "kube-api-access-w4bzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.449362 4832 scope.go:117] "RemoveContainer" containerID="385b3f44cd6f817fd2b2c678f30618d94f7a817411b3e93d20d567e3ce4cff05" Mar 12 15:10:08 crc kubenswrapper[4832]: E0312 15:10:08.449943 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"385b3f44cd6f817fd2b2c678f30618d94f7a817411b3e93d20d567e3ce4cff05\": container with ID starting with 385b3f44cd6f817fd2b2c678f30618d94f7a817411b3e93d20d567e3ce4cff05 not found: ID does not exist" containerID="385b3f44cd6f817fd2b2c678f30618d94f7a817411b3e93d20d567e3ce4cff05" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.450014 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"385b3f44cd6f817fd2b2c678f30618d94f7a817411b3e93d20d567e3ce4cff05"} err="failed to get container status \"385b3f44cd6f817fd2b2c678f30618d94f7a817411b3e93d20d567e3ce4cff05\": rpc error: code = NotFound desc = could not find container \"385b3f44cd6f817fd2b2c678f30618d94f7a817411b3e93d20d567e3ce4cff05\": container with ID starting with 385b3f44cd6f817fd2b2c678f30618d94f7a817411b3e93d20d567e3ce4cff05 not found: ID does not exist" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.450043 4832 scope.go:117] "RemoveContainer" containerID="abcd07a6568c0c4eb4649fbb7b04c9eaecfb913d22434af96f540571342f8f49" Mar 12 15:10:08 crc kubenswrapper[4832]: E0312 15:10:08.450477 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= could not find container \"abcd07a6568c0c4eb4649fbb7b04c9eaecfb913d22434af96f540571342f8f49\": container with ID starting with abcd07a6568c0c4eb4649fbb7b04c9eaecfb913d22434af96f540571342f8f49 not found: ID does not exist" containerID="abcd07a6568c0c4eb4649fbb7b04c9eaecfb913d22434af96f540571342f8f49" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.450514 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abcd07a6568c0c4eb4649fbb7b04c9eaecfb913d22434af96f540571342f8f49"} err="failed to get container status \"abcd07a6568c0c4eb4649fbb7b04c9eaecfb913d22434af96f540571342f8f49\": rpc error: code = NotFound desc = could not find container \"abcd07a6568c0c4eb4649fbb7b04c9eaecfb913d22434af96f540571342f8f49\": container with ID starting with abcd07a6568c0c4eb4649fbb7b04c9eaecfb913d22434af96f540571342f8f49 not found: ID does not exist" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.450530 4832 scope.go:117] "RemoveContainer" containerID="63f9209b2d9755ccef87972c8ba6fa8923112b871e276016119374f0471d73e4" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.475360 4832 scope.go:117] "RemoveContainer" containerID="63f9209b2d9755ccef87972c8ba6fa8923112b871e276016119374f0471d73e4" Mar 12 15:10:08 crc kubenswrapper[4832]: E0312 15:10:08.475913 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63f9209b2d9755ccef87972c8ba6fa8923112b871e276016119374f0471d73e4\": container with ID starting with 63f9209b2d9755ccef87972c8ba6fa8923112b871e276016119374f0471d73e4 not found: ID does not exist" containerID="63f9209b2d9755ccef87972c8ba6fa8923112b871e276016119374f0471d73e4" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.475953 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63f9209b2d9755ccef87972c8ba6fa8923112b871e276016119374f0471d73e4"} err="failed to get container status 
\"63f9209b2d9755ccef87972c8ba6fa8923112b871e276016119374f0471d73e4\": rpc error: code = NotFound desc = could not find container \"63f9209b2d9755ccef87972c8ba6fa8923112b871e276016119374f0471d73e4\": container with ID starting with 63f9209b2d9755ccef87972c8ba6fa8923112b871e276016119374f0471d73e4 not found: ID does not exist" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.513809 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-combined-ca-bundle\") pod \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.513857 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-public-tls-certs\") pod \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.513881 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1af9461-d1f9-4531-92a3-7c8904519749-combined-ca-bundle\") pod \"d1af9461-d1f9-4531-92a3-7c8904519749\" (UID: \"d1af9461-d1f9-4531-92a3-7c8904519749\") " Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.513902 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q9wn\" (UniqueName: \"kubernetes.io/projected/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-kube-api-access-9q9wn\") pod \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.513977 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-config-data\") pod \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.513998 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-logs\") pod \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.514016 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1af9461-d1f9-4531-92a3-7c8904519749-config-data\") pod \"d1af9461-d1f9-4531-92a3-7c8904519749\" (UID: \"d1af9461-d1f9-4531-92a3-7c8904519749\") " Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.514043 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-internal-tls-certs\") pod \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\" (UID: \"f7e393f0-f9ae-431e-98b7-eb3801fdfd22\") " Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.514337 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4bzd\" (UniqueName: \"kubernetes.io/projected/d1af9461-d1f9-4531-92a3-7c8904519749-kube-api-access-w4bzd\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.514826 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-logs" (OuterVolumeSpecName: "logs") pod "f7e393f0-f9ae-431e-98b7-eb3801fdfd22" (UID: "f7e393f0-f9ae-431e-98b7-eb3801fdfd22"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.519273 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-kube-api-access-9q9wn" (OuterVolumeSpecName: "kube-api-access-9q9wn") pod "f7e393f0-f9ae-431e-98b7-eb3801fdfd22" (UID: "f7e393f0-f9ae-431e-98b7-eb3801fdfd22"). InnerVolumeSpecName "kube-api-access-9q9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.545442 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1af9461-d1f9-4531-92a3-7c8904519749-config-data" (OuterVolumeSpecName: "config-data") pod "d1af9461-d1f9-4531-92a3-7c8904519749" (UID: "d1af9461-d1f9-4531-92a3-7c8904519749"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.559994 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-config-data" (OuterVolumeSpecName: "config-data") pod "f7e393f0-f9ae-431e-98b7-eb3801fdfd22" (UID: "f7e393f0-f9ae-431e-98b7-eb3801fdfd22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.561271 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7e393f0-f9ae-431e-98b7-eb3801fdfd22" (UID: "f7e393f0-f9ae-431e-98b7-eb3801fdfd22"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.574378 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1af9461-d1f9-4531-92a3-7c8904519749-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1af9461-d1f9-4531-92a3-7c8904519749" (UID: "d1af9461-d1f9-4531-92a3-7c8904519749"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.590336 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f7e393f0-f9ae-431e-98b7-eb3801fdfd22" (UID: "f7e393f0-f9ae-431e-98b7-eb3801fdfd22"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.599190 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f7e393f0-f9ae-431e-98b7-eb3801fdfd22" (UID: "f7e393f0-f9ae-431e-98b7-eb3801fdfd22"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.616157 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.616192 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.616205 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1af9461-d1f9-4531-92a3-7c8904519749-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.616218 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q9wn\" (UniqueName: \"kubernetes.io/projected/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-kube-api-access-9q9wn\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.616231 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.616244 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-logs\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.616256 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1af9461-d1f9-4531-92a3-7c8904519749-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.616269 4832 
reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e393f0-f9ae-431e-98b7-eb3801fdfd22-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.636618 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61b7e836-b94e-4397-b34f-99bf775d778d" path="/var/lib/kubelet/pods/61b7e836-b94e-4397-b34f-99bf775d778d/volumes" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.712426 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.722379 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.735455 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 15:10:08 crc kubenswrapper[4832]: E0312 15:10:08.736797 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e393f0-f9ae-431e-98b7-eb3801fdfd22" containerName="nova-api-log" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.736823 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e393f0-f9ae-431e-98b7-eb3801fdfd22" containerName="nova-api-log" Mar 12 15:10:08 crc kubenswrapper[4832]: E0312 15:10:08.736861 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e393f0-f9ae-431e-98b7-eb3801fdfd22" containerName="nova-api-api" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.736869 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e393f0-f9ae-431e-98b7-eb3801fdfd22" containerName="nova-api-api" Mar 12 15:10:08 crc kubenswrapper[4832]: E0312 15:10:08.736887 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1af9461-d1f9-4531-92a3-7c8904519749" containerName="nova-scheduler-scheduler" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.736894 4832 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="d1af9461-d1f9-4531-92a3-7c8904519749" containerName="nova-scheduler-scheduler" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.737126 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1af9461-d1f9-4531-92a3-7c8904519749" containerName="nova-scheduler-scheduler" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.737153 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7e393f0-f9ae-431e-98b7-eb3801fdfd22" containerName="nova-api-api" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.737171 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7e393f0-f9ae-431e-98b7-eb3801fdfd22" containerName="nova-api-log" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.737888 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.741009 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.746333 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.934201 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5435c879-37ba-4fb2-bfb5-a7ccbf3d474c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5435c879-37ba-4fb2-bfb5-a7ccbf3d474c\") " pod="openstack/nova-scheduler-0" Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.934696 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcndd\" (UniqueName: \"kubernetes.io/projected/5435c879-37ba-4fb2-bfb5-a7ccbf3d474c-kube-api-access-fcndd\") pod \"nova-scheduler-0\" (UID: \"5435c879-37ba-4fb2-bfb5-a7ccbf3d474c\") " pod="openstack/nova-scheduler-0" 
Mar 12 15:10:08 crc kubenswrapper[4832]: I0312 15:10:08.934779 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5435c879-37ba-4fb2-bfb5-a7ccbf3d474c-config-data\") pod \"nova-scheduler-0\" (UID: \"5435c879-37ba-4fb2-bfb5-a7ccbf3d474c\") " pod="openstack/nova-scheduler-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.035946 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5435c879-37ba-4fb2-bfb5-a7ccbf3d474c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5435c879-37ba-4fb2-bfb5-a7ccbf3d474c\") " pod="openstack/nova-scheduler-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.036006 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcndd\" (UniqueName: \"kubernetes.io/projected/5435c879-37ba-4fb2-bfb5-a7ccbf3d474c-kube-api-access-fcndd\") pod \"nova-scheduler-0\" (UID: \"5435c879-37ba-4fb2-bfb5-a7ccbf3d474c\") " pod="openstack/nova-scheduler-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.036040 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5435c879-37ba-4fb2-bfb5-a7ccbf3d474c-config-data\") pod \"nova-scheduler-0\" (UID: \"5435c879-37ba-4fb2-bfb5-a7ccbf3d474c\") " pod="openstack/nova-scheduler-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.041106 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5435c879-37ba-4fb2-bfb5-a7ccbf3d474c-config-data\") pod \"nova-scheduler-0\" (UID: \"5435c879-37ba-4fb2-bfb5-a7ccbf3d474c\") " pod="openstack/nova-scheduler-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.041955 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5435c879-37ba-4fb2-bfb5-a7ccbf3d474c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5435c879-37ba-4fb2-bfb5-a7ccbf3d474c\") " pod="openstack/nova-scheduler-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.051926 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcndd\" (UniqueName: \"kubernetes.io/projected/5435c879-37ba-4fb2-bfb5-a7ccbf3d474c-kube-api-access-fcndd\") pod \"nova-scheduler-0\" (UID: \"5435c879-37ba-4fb2-bfb5-a7ccbf3d474c\") " pod="openstack/nova-scheduler-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.073020 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.384774 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"864eb0ae-dd5f-438f-81a0-e48bf297eecb","Type":"ContainerStarted","Data":"67ca7e6f332be0142fb94cfd7be10a7db7f305c956c4b19c552f112437bfe991"} Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.385930 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.386023 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"864eb0ae-dd5f-438f-81a0-e48bf297eecb","Type":"ContainerStarted","Data":"8cdbc4e537be5f21398de2b44975a6eebe8d4a3932b8e41ab588262721ab0685"} Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.386069 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"864eb0ae-dd5f-438f-81a0-e48bf297eecb","Type":"ContainerStarted","Data":"801ca67efb862be74426a70b84dedde5b872f0a5bc78d69993e848b99d6184b5"} Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.413101 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.413074677 podStartE2EDuration="3.413074677s" podCreationTimestamp="2026-03-12 15:10:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:10:09.40970005 +0000 UTC m=+1368.053714296" watchObservedRunningTime="2026-03-12 15:10:09.413074677 +0000 UTC m=+1368.057088913" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.440300 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.473270 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.480190 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.482041 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.486025 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.486703 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.486887 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.490363 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.534145 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.648635 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkp4f\" (UniqueName: \"kubernetes.io/projected/3cbc1286-469f-4849-bb8a-4452af8d43d7-kube-api-access-xkp4f\") pod \"nova-api-0\" (UID: \"3cbc1286-469f-4849-bb8a-4452af8d43d7\") " pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.648758 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cbc1286-469f-4849-bb8a-4452af8d43d7-logs\") pod \"nova-api-0\" (UID: \"3cbc1286-469f-4849-bb8a-4452af8d43d7\") " pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.648890 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cbc1286-469f-4849-bb8a-4452af8d43d7-config-data\") pod \"nova-api-0\" (UID: \"3cbc1286-469f-4849-bb8a-4452af8d43d7\") " pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 
15:10:09.648986 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cbc1286-469f-4849-bb8a-4452af8d43d7-public-tls-certs\") pod \"nova-api-0\" (UID: \"3cbc1286-469f-4849-bb8a-4452af8d43d7\") " pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.649163 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbc1286-469f-4849-bb8a-4452af8d43d7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3cbc1286-469f-4849-bb8a-4452af8d43d7\") " pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.649205 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cbc1286-469f-4849-bb8a-4452af8d43d7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3cbc1286-469f-4849-bb8a-4452af8d43d7\") " pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.751444 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkp4f\" (UniqueName: \"kubernetes.io/projected/3cbc1286-469f-4849-bb8a-4452af8d43d7-kube-api-access-xkp4f\") pod \"nova-api-0\" (UID: \"3cbc1286-469f-4849-bb8a-4452af8d43d7\") " pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.751552 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cbc1286-469f-4849-bb8a-4452af8d43d7-logs\") pod \"nova-api-0\" (UID: \"3cbc1286-469f-4849-bb8a-4452af8d43d7\") " pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.751585 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3cbc1286-469f-4849-bb8a-4452af8d43d7-config-data\") pod \"nova-api-0\" (UID: \"3cbc1286-469f-4849-bb8a-4452af8d43d7\") " pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.751638 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cbc1286-469f-4849-bb8a-4452af8d43d7-public-tls-certs\") pod \"nova-api-0\" (UID: \"3cbc1286-469f-4849-bb8a-4452af8d43d7\") " pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.751724 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbc1286-469f-4849-bb8a-4452af8d43d7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3cbc1286-469f-4849-bb8a-4452af8d43d7\") " pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.751748 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cbc1286-469f-4849-bb8a-4452af8d43d7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3cbc1286-469f-4849-bb8a-4452af8d43d7\") " pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.752279 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cbc1286-469f-4849-bb8a-4452af8d43d7-logs\") pod \"nova-api-0\" (UID: \"3cbc1286-469f-4849-bb8a-4452af8d43d7\") " pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.755854 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cbc1286-469f-4849-bb8a-4452af8d43d7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3cbc1286-469f-4849-bb8a-4452af8d43d7\") " pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.755934 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cbc1286-469f-4849-bb8a-4452af8d43d7-public-tls-certs\") pod \"nova-api-0\" (UID: \"3cbc1286-469f-4849-bb8a-4452af8d43d7\") " pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.755982 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cbc1286-469f-4849-bb8a-4452af8d43d7-config-data\") pod \"nova-api-0\" (UID: \"3cbc1286-469f-4849-bb8a-4452af8d43d7\") " pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.756399 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbc1286-469f-4849-bb8a-4452af8d43d7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3cbc1286-469f-4849-bb8a-4452af8d43d7\") " pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.775734 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkp4f\" (UniqueName: \"kubernetes.io/projected/3cbc1286-469f-4849-bb8a-4452af8d43d7-kube-api-access-xkp4f\") pod \"nova-api-0\" (UID: \"3cbc1286-469f-4849-bb8a-4452af8d43d7\") " pod="openstack/nova-api-0" Mar 12 15:10:09 crc kubenswrapper[4832]: I0312 15:10:09.796809 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 15:10:10 crc kubenswrapper[4832]: W0312 15:10:10.347761 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cbc1286_469f_4849_bb8a_4452af8d43d7.slice/crio-777c5575271cfe2f7a9c2a128c298d9dc97d3ae1f492ea815dee5eb9776e4c75 WatchSource:0}: Error finding container 777c5575271cfe2f7a9c2a128c298d9dc97d3ae1f492ea815dee5eb9776e4c75: Status 404 returned error can't find the container with id 777c5575271cfe2f7a9c2a128c298d9dc97d3ae1f492ea815dee5eb9776e4c75 Mar 12 15:10:10 crc kubenswrapper[4832]: I0312 15:10:10.358082 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 15:10:10 crc kubenswrapper[4832]: I0312 15:10:10.402407 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e","Type":"ContainerStarted","Data":"9ca39f4d628c8233d02ef404fdea4f8a8a0770229386be17be6dd2e4df29c236"} Mar 12 15:10:10 crc kubenswrapper[4832]: I0312 15:10:10.404071 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 15:10:10 crc kubenswrapper[4832]: I0312 15:10:10.407171 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cbc1286-469f-4849-bb8a-4452af8d43d7","Type":"ContainerStarted","Data":"777c5575271cfe2f7a9c2a128c298d9dc97d3ae1f492ea815dee5eb9776e4c75"} Mar 12 15:10:10 crc kubenswrapper[4832]: I0312 15:10:10.418517 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5435c879-37ba-4fb2-bfb5-a7ccbf3d474c","Type":"ContainerStarted","Data":"d5699b2988306de476cec0ad15517e13ceb555c9d243743b9a6391ec33734a03"} Mar 12 15:10:10 crc kubenswrapper[4832]: I0312 15:10:10.418573 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"5435c879-37ba-4fb2-bfb5-a7ccbf3d474c","Type":"ContainerStarted","Data":"906f63ed9ea09df058d097ddcf388a7c94139afbc0dd883c464eecf551884b42"} Mar 12 15:10:10 crc kubenswrapper[4832]: I0312 15:10:10.431866 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.49343451 podStartE2EDuration="20.431846664s" podCreationTimestamp="2026-03-12 15:09:50 +0000 UTC" firstStartedPulling="2026-03-12 15:09:50.956919685 +0000 UTC m=+1349.600933911" lastFinishedPulling="2026-03-12 15:10:09.895331799 +0000 UTC m=+1368.539346065" observedRunningTime="2026-03-12 15:10:10.427404397 +0000 UTC m=+1369.071418643" watchObservedRunningTime="2026-03-12 15:10:10.431846664 +0000 UTC m=+1369.075860900" Mar 12 15:10:10 crc kubenswrapper[4832]: I0312 15:10:10.640886 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1af9461-d1f9-4531-92a3-7c8904519749" path="/var/lib/kubelet/pods/d1af9461-d1f9-4531-92a3-7c8904519749/volumes" Mar 12 15:10:10 crc kubenswrapper[4832]: I0312 15:10:10.642318 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e393f0-f9ae-431e-98b7-eb3801fdfd22" path="/var/lib/kubelet/pods/f7e393f0-f9ae-431e-98b7-eb3801fdfd22/volumes" Mar 12 15:10:11 crc kubenswrapper[4832]: I0312 15:10:11.437743 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cbc1286-469f-4849-bb8a-4452af8d43d7","Type":"ContainerStarted","Data":"dd4035d1c5bb1a5488a3d67291fc660adce7212062d0931de1e33fbf9860ae55"} Mar 12 15:10:11 crc kubenswrapper[4832]: I0312 15:10:11.437830 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cbc1286-469f-4849-bb8a-4452af8d43d7","Type":"ContainerStarted","Data":"545f3220a2a25fd2cf8420fd7ef9d31cefebcd0e7890f47989e5d15c46e2f378"} Mar 12 15:10:11 crc kubenswrapper[4832]: I0312 15:10:11.472795 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-scheduler-0" podStartSLOduration=3.472760805 podStartE2EDuration="3.472760805s" podCreationTimestamp="2026-03-12 15:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:10:10.450728415 +0000 UTC m=+1369.094742661" watchObservedRunningTime="2026-03-12 15:10:11.472760805 +0000 UTC m=+1370.116775081" Mar 12 15:10:12 crc kubenswrapper[4832]: I0312 15:10:12.031618 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 15:10:12 crc kubenswrapper[4832]: I0312 15:10:12.031669 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 15:10:12 crc kubenswrapper[4832]: E0312 15:10:12.545471 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28a22ae2_4296_4673_b4eb_2ffd7065ac0a.slice/crio-conmon-629ef02557ec5e052c77017b16b74f188878aae766f79a6ef451f15c8ad2034e.scope\": RecentStats: unable to find data in memory cache]" Mar 12 15:10:14 crc kubenswrapper[4832]: I0312 15:10:14.073304 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 12 15:10:17 crc kubenswrapper[4832]: I0312 15:10:17.031647 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 15:10:17 crc kubenswrapper[4832]: I0312 15:10:17.032053 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 15:10:18 crc kubenswrapper[4832]: I0312 15:10:18.047768 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="864eb0ae-dd5f-438f-81a0-e48bf297eecb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 15:10:18 crc kubenswrapper[4832]: I0312 15:10:18.047767 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="864eb0ae-dd5f-438f-81a0-e48bf297eecb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 15:10:19 crc kubenswrapper[4832]: I0312 15:10:19.074020 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 12 15:10:19 crc kubenswrapper[4832]: I0312 15:10:19.104593 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 12 15:10:19 crc kubenswrapper[4832]: I0312 15:10:19.131807 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=10.131779835 podStartE2EDuration="10.131779835s" podCreationTimestamp="2026-03-12 15:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:10:11.474137195 +0000 UTC m=+1370.118151461" watchObservedRunningTime="2026-03-12 15:10:19.131779835 +0000 UTC m=+1377.775794081" Mar 12 15:10:19 crc kubenswrapper[4832]: I0312 15:10:19.574791 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 12 15:10:19 crc kubenswrapper[4832]: I0312 15:10:19.797915 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 15:10:19 crc kubenswrapper[4832]: I0312 15:10:19.798240 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 15:10:20 crc kubenswrapper[4832]: I0312 15:10:20.459057 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 
12 15:10:20 crc kubenswrapper[4832]: I0312 15:10:20.815751 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3cbc1286-469f-4849-bb8a-4452af8d43d7" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.214:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 15:10:20 crc kubenswrapper[4832]: I0312 15:10:20.815756 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3cbc1286-469f-4849-bb8a-4452af8d43d7" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.214:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 15:10:22 crc kubenswrapper[4832]: E0312 15:10:22.915562 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28a22ae2_4296_4673_b4eb_2ffd7065ac0a.slice/crio-conmon-629ef02557ec5e052c77017b16b74f188878aae766f79a6ef451f15c8ad2034e.scope\": RecentStats: unable to find data in memory cache]" Mar 12 15:10:26 crc kubenswrapper[4832]: I0312 15:10:26.314808 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:10:26 crc kubenswrapper[4832]: I0312 15:10:26.315577 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:10:27 crc kubenswrapper[4832]: I0312 15:10:27.094734 4832 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 12 15:10:27 crc kubenswrapper[4832]: I0312 15:10:27.098809 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 12 15:10:27 crc kubenswrapper[4832]: I0312 15:10:27.101515 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 15:10:27 crc kubenswrapper[4832]: I0312 15:10:27.622767 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 15:10:29 crc kubenswrapper[4832]: I0312 15:10:29.804408 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 15:10:29 crc kubenswrapper[4832]: I0312 15:10:29.804928 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 15:10:29 crc kubenswrapper[4832]: I0312 15:10:29.805521 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 15:10:29 crc kubenswrapper[4832]: I0312 15:10:29.805548 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 15:10:29 crc kubenswrapper[4832]: I0312 15:10:29.815959 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 15:10:29 crc kubenswrapper[4832]: I0312 15:10:29.816174 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 15:10:37 crc kubenswrapper[4832]: I0312 15:10:37.300695 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 15:10:38 crc kubenswrapper[4832]: I0312 15:10:38.131361 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 15:10:41 crc kubenswrapper[4832]: I0312 15:10:41.429964 4832 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/rabbitmq-server-0" podUID="fef23d2a-252b-4733-bb4e-e83d5de2f4f4" containerName="rabbitmq" containerID="cri-o://3c0588cb410895fd66ee00fd6cf380356c166231ccdfb260fbf0b23956184623" gracePeriod=604796 Mar 12 15:10:42 crc kubenswrapper[4832]: I0312 15:10:42.926699 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="667d5405-474a-4ab3-bcbf-8fd5d1c179aa" containerName="rabbitmq" containerID="cri-o://acc4670b7f5b5d88e913b3df9c986d9106a78f0d8af8878522b42dc6510e2801" gracePeriod=604796 Mar 12 15:10:45 crc kubenswrapper[4832]: I0312 15:10:45.188725 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="667d5405-474a-4ab3-bcbf-8fd5d1c179aa" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Mar 12 15:10:45 crc kubenswrapper[4832]: I0312 15:10:45.531863 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="fef23d2a-252b-4733-bb4e-e83d5de2f4f4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Mar 12 15:10:47 crc kubenswrapper[4832]: I0312 15:10:47.720527 4832 scope.go:117] "RemoveContainer" containerID="8f449e1ebac5bcdb683faf9c6dcdb34c6f4dbf1557ac4f355e25bbb97c9ca157" Mar 12 15:10:47 crc kubenswrapper[4832]: I0312 15:10:47.841693 4832 generic.go:334] "Generic (PLEG): container finished" podID="fef23d2a-252b-4733-bb4e-e83d5de2f4f4" containerID="3c0588cb410895fd66ee00fd6cf380356c166231ccdfb260fbf0b23956184623" exitCode=0 Mar 12 15:10:47 crc kubenswrapper[4832]: I0312 15:10:47.841783 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fef23d2a-252b-4733-bb4e-e83d5de2f4f4","Type":"ContainerDied","Data":"3c0588cb410895fd66ee00fd6cf380356c166231ccdfb260fbf0b23956184623"} Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.054056 4832 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.112098 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmm6j\" (UniqueName: \"kubernetes.io/projected/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-kube-api-access-lmm6j\") pod \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.112161 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.112213 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-server-conf\") pod \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.112235 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-tls\") pod \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.112263 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-erlang-cookie\") pod \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.112352 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-pod-info\") pod \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.112470 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-confd\") pod \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.112498 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-config-data\") pod \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.112586 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-erlang-cookie-secret\") pod \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.112616 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-plugins\") pod \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.112646 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-plugins-conf\") pod \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\" (UID: \"fef23d2a-252b-4733-bb4e-e83d5de2f4f4\") " Mar 12 
15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.115251 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "fef23d2a-252b-4733-bb4e-e83d5de2f4f4" (UID: "fef23d2a-252b-4733-bb4e-e83d5de2f4f4"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.123679 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "fef23d2a-252b-4733-bb4e-e83d5de2f4f4" (UID: "fef23d2a-252b-4733-bb4e-e83d5de2f4f4"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.133868 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "fef23d2a-252b-4733-bb4e-e83d5de2f4f4" (UID: "fef23d2a-252b-4733-bb4e-e83d5de2f4f4"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.137973 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-kube-api-access-lmm6j" (OuterVolumeSpecName: "kube-api-access-lmm6j") pod "fef23d2a-252b-4733-bb4e-e83d5de2f4f4" (UID: "fef23d2a-252b-4733-bb4e-e83d5de2f4f4"). InnerVolumeSpecName "kube-api-access-lmm6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.138775 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-pod-info" (OuterVolumeSpecName: "pod-info") pod "fef23d2a-252b-4733-bb4e-e83d5de2f4f4" (UID: "fef23d2a-252b-4733-bb4e-e83d5de2f4f4"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.140976 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "fef23d2a-252b-4733-bb4e-e83d5de2f4f4" (UID: "fef23d2a-252b-4733-bb4e-e83d5de2f4f4"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.174769 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "fef23d2a-252b-4733-bb4e-e83d5de2f4f4" (UID: "fef23d2a-252b-4733-bb4e-e83d5de2f4f4"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.174871 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "fef23d2a-252b-4733-bb4e-e83d5de2f4f4" (UID: "fef23d2a-252b-4733-bb4e-e83d5de2f4f4"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.209133 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-config-data" (OuterVolumeSpecName: "config-data") pod "fef23d2a-252b-4733-bb4e-e83d5de2f4f4" (UID: "fef23d2a-252b-4733-bb4e-e83d5de2f4f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.214882 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.214914 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.214926 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.214936 4832 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-pod-info\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.214946 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.214954 4832 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.214961 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.214969 4832 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.214979 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmm6j\" (UniqueName: \"kubernetes.io/projected/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-kube-api-access-lmm6j\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.253597 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.273401 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-server-conf" (OuterVolumeSpecName: "server-conf") pod "fef23d2a-252b-4733-bb4e-e83d5de2f4f4" (UID: "fef23d2a-252b-4733-bb4e-e83d5de2f4f4"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.317847 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.317876 4832 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-server-conf\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.362665 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "fef23d2a-252b-4733-bb4e-e83d5de2f4f4" (UID: "fef23d2a-252b-4733-bb4e-e83d5de2f4f4"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.419635 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fef23d2a-252b-4733-bb4e-e83d5de2f4f4-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.854114 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fef23d2a-252b-4733-bb4e-e83d5de2f4f4","Type":"ContainerDied","Data":"aa7e1a390303b63284f4cafc49be1f9e64a9713a984093bbd0099f1abed489ab"} Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.854191 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.854934 4832 scope.go:117] "RemoveContainer" containerID="3c0588cb410895fd66ee00fd6cf380356c166231ccdfb260fbf0b23956184623" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.886962 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.906775 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.921555 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 15:10:48 crc kubenswrapper[4832]: E0312 15:10:48.922902 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef23d2a-252b-4733-bb4e-e83d5de2f4f4" containerName="setup-container" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.922925 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef23d2a-252b-4733-bb4e-e83d5de2f4f4" containerName="setup-container" Mar 12 15:10:48 crc kubenswrapper[4832]: E0312 15:10:48.922948 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef23d2a-252b-4733-bb4e-e83d5de2f4f4" containerName="rabbitmq" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.922956 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef23d2a-252b-4733-bb4e-e83d5de2f4f4" containerName="rabbitmq" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.923176 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fef23d2a-252b-4733-bb4e-e83d5de2f4f4" containerName="rabbitmq" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.924152 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.929167 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.929193 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.929946 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.930062 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.930187 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.930386 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-r92zl" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.930493 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.933053 4832 scope.go:117] "RemoveContainer" containerID="93b6dcd69cc1d58f2baf9259c24e904358b6c3d70cf097eb0116925f4f7421f6" Mar 12 15:10:48 crc kubenswrapper[4832]: I0312 15:10:48.943127 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.032747 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: 
I0312 15:10:49.032791 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.032916 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.033025 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.033075 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.033239 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.033332 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-config-data\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.033417 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.033459 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.033486 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6rct\" (UniqueName: \"kubernetes.io/projected/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-kube-api-access-m6rct\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.033626 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.135658 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.135731 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.135767 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-config-data\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.135798 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.135821 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.135839 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6rct\" (UniqueName: \"kubernetes.io/projected/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-kube-api-access-m6rct\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" 
Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.135855 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.135912 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.135934 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.135952 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.135993 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.136495 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.136907 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.137737 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.138038 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.139996 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.140032 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.141232 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.143788 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.145831 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.151193 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.154664 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6rct\" (UniqueName: \"kubernetes.io/projected/b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3-kube-api-access-m6rct\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.183094 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3\") " pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.336905 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.506656 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.648062 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-plugins-conf\") pod \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.648390 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw276\" (UniqueName: \"kubernetes.io/projected/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-kube-api-access-pw276\") pod \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.648422 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-config-data\") pod \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.648448 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-erlang-cookie-secret\") pod \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\" (UID: 
\"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.648496 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-confd\") pod \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.648616 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-tls\") pod \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.648644 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.648672 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-server-conf\") pod \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.648734 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-plugins\") pod \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.648760 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-erlang-cookie\") pod \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.648798 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-pod-info\") pod \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\" (UID: \"667d5405-474a-4ab3-bcbf-8fd5d1c179aa\") " Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.649579 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "667d5405-474a-4ab3-bcbf-8fd5d1c179aa" (UID: "667d5405-474a-4ab3-bcbf-8fd5d1c179aa"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.654943 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "667d5405-474a-4ab3-bcbf-8fd5d1c179aa" (UID: "667d5405-474a-4ab3-bcbf-8fd5d1c179aa"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.656933 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "667d5405-474a-4ab3-bcbf-8fd5d1c179aa" (UID: "667d5405-474a-4ab3-bcbf-8fd5d1c179aa"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.658349 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "667d5405-474a-4ab3-bcbf-8fd5d1c179aa" (UID: "667d5405-474a-4ab3-bcbf-8fd5d1c179aa"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.661739 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "667d5405-474a-4ab3-bcbf-8fd5d1c179aa" (UID: "667d5405-474a-4ab3-bcbf-8fd5d1c179aa"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.666687 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-kube-api-access-pw276" (OuterVolumeSpecName: "kube-api-access-pw276") pod "667d5405-474a-4ab3-bcbf-8fd5d1c179aa" (UID: "667d5405-474a-4ab3-bcbf-8fd5d1c179aa"). InnerVolumeSpecName "kube-api-access-pw276". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.668192 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-pod-info" (OuterVolumeSpecName: "pod-info") pod "667d5405-474a-4ab3-bcbf-8fd5d1c179aa" (UID: "667d5405-474a-4ab3-bcbf-8fd5d1c179aa"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.671587 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "667d5405-474a-4ab3-bcbf-8fd5d1c179aa" (UID: "667d5405-474a-4ab3-bcbf-8fd5d1c179aa"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.703667 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-config-data" (OuterVolumeSpecName: "config-data") pod "667d5405-474a-4ab3-bcbf-8fd5d1c179aa" (UID: "667d5405-474a-4ab3-bcbf-8fd5d1c179aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.714996 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-server-conf" (OuterVolumeSpecName: "server-conf") pod "667d5405-474a-4ab3-bcbf-8fd5d1c179aa" (UID: "667d5405-474a-4ab3-bcbf-8fd5d1c179aa"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.751860 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.751898 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.751910 4832 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-server-conf\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.751923 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.751935 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.751945 4832 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-pod-info\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.751965 4832 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.751979 4832 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw276\" (UniqueName: \"kubernetes.io/projected/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-kube-api-access-pw276\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.751990 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.752000 4832 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.788553 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.795952 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "667d5405-474a-4ab3-bcbf-8fd5d1c179aa" (UID: "667d5405-474a-4ab3-bcbf-8fd5d1c179aa"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.853574 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/667d5405-474a-4ab3-bcbf-8fd5d1c179aa-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.853616 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.867825 4832 generic.go:334] "Generic (PLEG): container finished" podID="667d5405-474a-4ab3-bcbf-8fd5d1c179aa" containerID="acc4670b7f5b5d88e913b3df9c986d9106a78f0d8af8878522b42dc6510e2801" exitCode=0 Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.867862 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"667d5405-474a-4ab3-bcbf-8fd5d1c179aa","Type":"ContainerDied","Data":"acc4670b7f5b5d88e913b3df9c986d9106a78f0d8af8878522b42dc6510e2801"} Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.867883 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"667d5405-474a-4ab3-bcbf-8fd5d1c179aa","Type":"ContainerDied","Data":"756da955fede94e319835c2f5a18954bc71395a0426d68f26b064f8ce905f05b"} Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.867900 4832 scope.go:117] "RemoveContainer" containerID="acc4670b7f5b5d88e913b3df9c986d9106a78f0d8af8878522b42dc6510e2801" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.867999 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.905561 4832 scope.go:117] "RemoveContainer" containerID="c92c8d3b9a7e2cdb4724ce742d8387f839d1938d182a7d4c794c41617a533a88" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.908762 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 15:10:49 crc kubenswrapper[4832]: W0312 15:10:49.917259 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb59b4c2e_d6b5_401b_b7a2_0faf2920fcb3.slice/crio-6a38e1ec56aadc007aa374b4c3774439c4a0d2a340c9acc5e516952f3d8f1c7c WatchSource:0}: Error finding container 6a38e1ec56aadc007aa374b4c3774439c4a0d2a340c9acc5e516952f3d8f1c7c: Status 404 returned error can't find the container with id 6a38e1ec56aadc007aa374b4c3774439c4a0d2a340c9acc5e516952f3d8f1c7c Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.918378 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.926660 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.952008 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 15:10:49 crc kubenswrapper[4832]: E0312 15:10:49.952677 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667d5405-474a-4ab3-bcbf-8fd5d1c179aa" containerName="rabbitmq" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.952699 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="667d5405-474a-4ab3-bcbf-8fd5d1c179aa" containerName="rabbitmq" Mar 12 15:10:49 crc kubenswrapper[4832]: E0312 15:10:49.952740 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667d5405-474a-4ab3-bcbf-8fd5d1c179aa" containerName="setup-container" 
Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.952750 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="667d5405-474a-4ab3-bcbf-8fd5d1c179aa" containerName="setup-container" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.952952 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="667d5405-474a-4ab3-bcbf-8fd5d1c179aa" containerName="rabbitmq" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.954578 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.958490 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.958696 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.958707 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.958859 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.959191 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-52mlh" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.959445 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.959690 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 12 15:10:49 crc kubenswrapper[4832]: I0312 15:10:49.972880 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 
15:10:50.015149 4832 scope.go:117] "RemoveContainer" containerID="acc4670b7f5b5d88e913b3df9c986d9106a78f0d8af8878522b42dc6510e2801" Mar 12 15:10:50 crc kubenswrapper[4832]: E0312 15:10:50.016023 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acc4670b7f5b5d88e913b3df9c986d9106a78f0d8af8878522b42dc6510e2801\": container with ID starting with acc4670b7f5b5d88e913b3df9c986d9106a78f0d8af8878522b42dc6510e2801 not found: ID does not exist" containerID="acc4670b7f5b5d88e913b3df9c986d9106a78f0d8af8878522b42dc6510e2801" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.016058 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc4670b7f5b5d88e913b3df9c986d9106a78f0d8af8878522b42dc6510e2801"} err="failed to get container status \"acc4670b7f5b5d88e913b3df9c986d9106a78f0d8af8878522b42dc6510e2801\": rpc error: code = NotFound desc = could not find container \"acc4670b7f5b5d88e913b3df9c986d9106a78f0d8af8878522b42dc6510e2801\": container with ID starting with acc4670b7f5b5d88e913b3df9c986d9106a78f0d8af8878522b42dc6510e2801 not found: ID does not exist" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.016084 4832 scope.go:117] "RemoveContainer" containerID="c92c8d3b9a7e2cdb4724ce742d8387f839d1938d182a7d4c794c41617a533a88" Mar 12 15:10:50 crc kubenswrapper[4832]: E0312 15:10:50.019599 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c92c8d3b9a7e2cdb4724ce742d8387f839d1938d182a7d4c794c41617a533a88\": container with ID starting with c92c8d3b9a7e2cdb4724ce742d8387f839d1938d182a7d4c794c41617a533a88 not found: ID does not exist" containerID="c92c8d3b9a7e2cdb4724ce742d8387f839d1938d182a7d4c794c41617a533a88" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.019659 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c92c8d3b9a7e2cdb4724ce742d8387f839d1938d182a7d4c794c41617a533a88"} err="failed to get container status \"c92c8d3b9a7e2cdb4724ce742d8387f839d1938d182a7d4c794c41617a533a88\": rpc error: code = NotFound desc = could not find container \"c92c8d3b9a7e2cdb4724ce742d8387f839d1938d182a7d4c794c41617a533a88\": container with ID starting with c92c8d3b9a7e2cdb4724ce742d8387f839d1938d182a7d4c794c41617a533a88 not found: ID does not exist" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.058464 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.058590 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.058647 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.058679 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.058713 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.058730 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.058755 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.058793 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r78dw\" (UniqueName: \"kubernetes.io/projected/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-kube-api-access-r78dw\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.058811 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.058844 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.058863 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.160458 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.160560 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.160613 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 
15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.160636 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.160662 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.160671 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.161267 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r78dw\" (UniqueName: \"kubernetes.io/projected/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-kube-api-access-r78dw\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.161303 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.161367 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.161394 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.161414 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.161483 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.161866 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.161926 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.162053 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.162253 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.163402 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.168781 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.170899 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.175835 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.176142 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.191158 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r78dw\" (UniqueName: \"kubernetes.io/projected/dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c-kube-api-access-r78dw\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.198311 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.275587 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-nznlw"] Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.277395 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.279896 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.291262 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-nznlw"] Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.296320 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.364773 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.364842 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.364899 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp777\" (UniqueName: \"kubernetes.io/projected/511cb178-234d-403e-9b90-d0d90648a5d8-kube-api-access-kp777\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.364942 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-dns-svc\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.364973 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.365000 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.365043 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-config\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.466384 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.466732 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.466775 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp777\" (UniqueName: \"kubernetes.io/projected/511cb178-234d-403e-9b90-d0d90648a5d8-kube-api-access-kp777\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.466801 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-dns-svc\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.466826 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.466847 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.466886 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-config\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.467519 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.467837 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-config\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.468295 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-dns-svc\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.472637 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.475239 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: 
\"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.475290 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.494362 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp777\" (UniqueName: \"kubernetes.io/projected/511cb178-234d-403e-9b90-d0d90648a5d8-kube-api-access-kp777\") pod \"dnsmasq-dns-d558885bc-nznlw\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.595390 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.631422 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="667d5405-474a-4ab3-bcbf-8fd5d1c179aa" path="/var/lib/kubelet/pods/667d5405-474a-4ab3-bcbf-8fd5d1c179aa/volumes" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.632673 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fef23d2a-252b-4733-bb4e-e83d5de2f4f4" path="/var/lib/kubelet/pods/fef23d2a-252b-4733-bb4e-e83d5de2f4f4/volumes" Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.742136 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.879647 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3","Type":"ContainerStarted","Data":"6a38e1ec56aadc007aa374b4c3774439c4a0d2a340c9acc5e516952f3d8f1c7c"} Mar 12 15:10:50 crc kubenswrapper[4832]: I0312 15:10:50.882007 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c","Type":"ContainerStarted","Data":"94b096ec8c8035b0574f646f2216814c93104d8ddd7f8d2f1fa6cdd81ab78ad0"} Mar 12 15:10:51 crc kubenswrapper[4832]: W0312 15:10:51.086106 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod511cb178_234d_403e_9b90_d0d90648a5d8.slice/crio-8ca1846c2f6966cd7fe65c32fd3ce5a49d235bfa8a1ad6bcd6afb1c21dab7cce WatchSource:0}: Error finding container 8ca1846c2f6966cd7fe65c32fd3ce5a49d235bfa8a1ad6bcd6afb1c21dab7cce: Status 404 returned error can't find the container with id 8ca1846c2f6966cd7fe65c32fd3ce5a49d235bfa8a1ad6bcd6afb1c21dab7cce Mar 12 15:10:51 crc kubenswrapper[4832]: I0312 15:10:51.093472 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-nznlw"] Mar 12 15:10:51 crc kubenswrapper[4832]: I0312 15:10:51.895695 4832 generic.go:334] "Generic (PLEG): container finished" podID="511cb178-234d-403e-9b90-d0d90648a5d8" containerID="adc1123019fa22018b003be713edbb492c65b1d6049eb0ed338187370f4b56a5" exitCode=0 Mar 12 15:10:51 crc kubenswrapper[4832]: I0312 15:10:51.896753 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-nznlw" event={"ID":"511cb178-234d-403e-9b90-d0d90648a5d8","Type":"ContainerDied","Data":"adc1123019fa22018b003be713edbb492c65b1d6049eb0ed338187370f4b56a5"} Mar 12 15:10:51 crc kubenswrapper[4832]: I0312 15:10:51.896783 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-nznlw" 
event={"ID":"511cb178-234d-403e-9b90-d0d90648a5d8","Type":"ContainerStarted","Data":"8ca1846c2f6966cd7fe65c32fd3ce5a49d235bfa8a1ad6bcd6afb1c21dab7cce"} Mar 12 15:10:51 crc kubenswrapper[4832]: I0312 15:10:51.901384 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3","Type":"ContainerStarted","Data":"9602510d6640951adf8fb860034ef5028f7e0eca02f1fc5c9ded7986f4d2097d"} Mar 12 15:10:52 crc kubenswrapper[4832]: I0312 15:10:52.912205 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c","Type":"ContainerStarted","Data":"1a8f474263c7933e6080efde797151bc0fa597a6caee02c80c80e5f37d6f4367"} Mar 12 15:10:52 crc kubenswrapper[4832]: I0312 15:10:52.914295 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-nznlw" event={"ID":"511cb178-234d-403e-9b90-d0d90648a5d8","Type":"ContainerStarted","Data":"334ad3b4537a2ee5c0232c23f7250ea6eb6feccc4e8ddb3ef5b1b24f6a254aac"} Mar 12 15:10:52 crc kubenswrapper[4832]: I0312 15:10:52.914530 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:10:52 crc kubenswrapper[4832]: I0312 15:10:52.979900 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-nznlw" podStartSLOduration=2.979880082 podStartE2EDuration="2.979880082s" podCreationTimestamp="2026-03-12 15:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:10:52.969081723 +0000 UTC m=+1411.613095949" watchObservedRunningTime="2026-03-12 15:10:52.979880082 +0000 UTC m=+1411.623894308" Mar 12 15:10:56 crc kubenswrapper[4832]: I0312 15:10:56.314176 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:10:56 crc kubenswrapper[4832]: I0312 15:10:56.314811 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:10:56 crc kubenswrapper[4832]: I0312 15:10:56.314864 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 15:10:56 crc kubenswrapper[4832]: I0312 15:10:56.315698 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ce9bdbc63a202a02b8c581188fe9e262cc502ac13b76d8ec31fd67aea15a9bba"} pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 15:10:56 crc kubenswrapper[4832]: I0312 15:10:56.315772 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" containerID="cri-o://ce9bdbc63a202a02b8c581188fe9e262cc502ac13b76d8ec31fd67aea15a9bba" gracePeriod=600 Mar 12 15:10:56 crc kubenswrapper[4832]: I0312 15:10:56.971501 4832 generic.go:334] "Generic (PLEG): container finished" podID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerID="ce9bdbc63a202a02b8c581188fe9e262cc502ac13b76d8ec31fd67aea15a9bba" exitCode=0 Mar 12 15:10:56 crc kubenswrapper[4832]: I0312 15:10:56.971617 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerDied","Data":"ce9bdbc63a202a02b8c581188fe9e262cc502ac13b76d8ec31fd67aea15a9bba"} Mar 12 15:10:56 crc kubenswrapper[4832]: I0312 15:10:56.971892 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerStarted","Data":"f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3"} Mar 12 15:10:56 crc kubenswrapper[4832]: I0312 15:10:56.971919 4832 scope.go:117] "RemoveContainer" containerID="b5ef664f80b54949b800723b7418c7329f135481aab0e1a581de0cbcca235b5b" Mar 12 15:11:00 crc kubenswrapper[4832]: I0312 15:11:00.597797 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:11:00 crc kubenswrapper[4832]: I0312 15:11:00.689628 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-l26ps"] Mar 12 15:11:00 crc kubenswrapper[4832]: I0312 15:11:00.689879 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" podUID="09318380-c905-4904-823c-7d0fa5e1b37c" containerName="dnsmasq-dns" containerID="cri-o://0f491f667ccc69de62df38373fa447b6b72d301ec1341fbc75a58f91850721f4" gracePeriod=10 Mar 12 15:11:00 crc kubenswrapper[4832]: I0312 15:11:00.915767 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-478qp"] Mar 12 15:11:00 crc kubenswrapper[4832]: I0312 15:11:00.921091 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:00 crc kubenswrapper[4832]: I0312 15:11:00.950593 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-478qp"] Mar 12 15:11:00 crc kubenswrapper[4832]: I0312 15:11:00.975589 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79812a9d-e99f-418d-98c0-c9005079c950-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:00 crc kubenswrapper[4832]: I0312 15:11:00.975930 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/79812a9d-e99f-418d-98c0-c9005079c950-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:00 crc kubenswrapper[4832]: I0312 15:11:00.976184 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79812a9d-e99f-418d-98c0-c9005079c950-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:00 crc kubenswrapper[4832]: I0312 15:11:00.976360 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgdm9\" (UniqueName: \"kubernetes.io/projected/79812a9d-e99f-418d-98c0-c9005079c950-kube-api-access-bgdm9\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:00 crc kubenswrapper[4832]: I0312 15:11:00.976449 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79812a9d-e99f-418d-98c0-c9005079c950-config\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:00 crc kubenswrapper[4832]: I0312 15:11:00.976475 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79812a9d-e99f-418d-98c0-c9005079c950-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:00 crc kubenswrapper[4832]: I0312 15:11:00.976591 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79812a9d-e99f-418d-98c0-c9005079c950-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.042131 4832 generic.go:334] "Generic (PLEG): container finished" podID="09318380-c905-4904-823c-7d0fa5e1b37c" containerID="0f491f667ccc69de62df38373fa447b6b72d301ec1341fbc75a58f91850721f4" exitCode=0 Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.042166 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" event={"ID":"09318380-c905-4904-823c-7d0fa5e1b37c","Type":"ContainerDied","Data":"0f491f667ccc69de62df38373fa447b6b72d301ec1341fbc75a58f91850721f4"} Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.078819 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgdm9\" (UniqueName: \"kubernetes.io/projected/79812a9d-e99f-418d-98c0-c9005079c950-kube-api-access-bgdm9\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: 
\"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.078878 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79812a9d-e99f-418d-98c0-c9005079c950-config\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.078898 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79812a9d-e99f-418d-98c0-c9005079c950-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.078932 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79812a9d-e99f-418d-98c0-c9005079c950-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.078965 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79812a9d-e99f-418d-98c0-c9005079c950-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.078989 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/79812a9d-e99f-418d-98c0-c9005079c950-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " 
pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.079049 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79812a9d-e99f-418d-98c0-c9005079c950-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.080021 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79812a9d-e99f-418d-98c0-c9005079c950-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.080067 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79812a9d-e99f-418d-98c0-c9005079c950-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.080417 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79812a9d-e99f-418d-98c0-c9005079c950-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.080615 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79812a9d-e99f-418d-98c0-c9005079c950-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:01 crc kubenswrapper[4832]: 
I0312 15:11:01.080685 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/79812a9d-e99f-418d-98c0-c9005079c950-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.080833 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79812a9d-e99f-418d-98c0-c9005079c950-config\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.103343 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgdm9\" (UniqueName: \"kubernetes.io/projected/79812a9d-e99f-418d-98c0-c9005079c950-kube-api-access-bgdm9\") pod \"dnsmasq-dns-78c64bc9c5-478qp\" (UID: \"79812a9d-e99f-418d-98c0-c9005079c950\") " pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.244349 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.344346 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.382791 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-ovsdbserver-sb\") pod \"09318380-c905-4904-823c-7d0fa5e1b37c\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.382857 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-dns-svc\") pod \"09318380-c905-4904-823c-7d0fa5e1b37c\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.382893 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-dns-swift-storage-0\") pod \"09318380-c905-4904-823c-7d0fa5e1b37c\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.382923 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z58rj\" (UniqueName: \"kubernetes.io/projected/09318380-c905-4904-823c-7d0fa5e1b37c-kube-api-access-z58rj\") pod \"09318380-c905-4904-823c-7d0fa5e1b37c\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.382972 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-ovsdbserver-nb\") pod \"09318380-c905-4904-823c-7d0fa5e1b37c\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.383019 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-config\") pod \"09318380-c905-4904-823c-7d0fa5e1b37c\" (UID: \"09318380-c905-4904-823c-7d0fa5e1b37c\") " Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.399884 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09318380-c905-4904-823c-7d0fa5e1b37c-kube-api-access-z58rj" (OuterVolumeSpecName: "kube-api-access-z58rj") pod "09318380-c905-4904-823c-7d0fa5e1b37c" (UID: "09318380-c905-4904-823c-7d0fa5e1b37c"). InnerVolumeSpecName "kube-api-access-z58rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.450810 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "09318380-c905-4904-823c-7d0fa5e1b37c" (UID: "09318380-c905-4904-823c-7d0fa5e1b37c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.454629 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-config" (OuterVolumeSpecName: "config") pod "09318380-c905-4904-823c-7d0fa5e1b37c" (UID: "09318380-c905-4904-823c-7d0fa5e1b37c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.481972 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "09318380-c905-4904-823c-7d0fa5e1b37c" (UID: "09318380-c905-4904-823c-7d0fa5e1b37c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.489743 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.489777 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z58rj\" (UniqueName: \"kubernetes.io/projected/09318380-c905-4904-823c-7d0fa5e1b37c-kube-api-access-z58rj\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.489789 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.489800 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.492083 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "09318380-c905-4904-823c-7d0fa5e1b37c" (UID: "09318380-c905-4904-823c-7d0fa5e1b37c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.502237 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "09318380-c905-4904-823c-7d0fa5e1b37c" (UID: "09318380-c905-4904-823c-7d0fa5e1b37c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.591444 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.591476 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09318380-c905-4904-823c-7d0fa5e1b37c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:01 crc kubenswrapper[4832]: I0312 15:11:01.769243 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-478qp"] Mar 12 15:11:02 crc kubenswrapper[4832]: I0312 15:11:02.054251 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" event={"ID":"09318380-c905-4904-823c-7d0fa5e1b37c","Type":"ContainerDied","Data":"71adeb18bf13a6934c0ab5ea37f336249faa56d3f09a292d85924822af6ce6b7"} Mar 12 15:11:02 crc kubenswrapper[4832]: I0312 15:11:02.054538 4832 scope.go:117] "RemoveContainer" containerID="0f491f667ccc69de62df38373fa447b6b72d301ec1341fbc75a58f91850721f4" Mar 12 15:11:02 crc kubenswrapper[4832]: I0312 15:11:02.054653 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-l26ps" Mar 12 15:11:02 crc kubenswrapper[4832]: I0312 15:11:02.058091 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" event={"ID":"79812a9d-e99f-418d-98c0-c9005079c950","Type":"ContainerStarted","Data":"b3b2e462e0428a0250f004626e7b04d3772b2aff2673aaa9bdb43fef076a07e6"} Mar 12 15:11:02 crc kubenswrapper[4832]: I0312 15:11:02.058129 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" event={"ID":"79812a9d-e99f-418d-98c0-c9005079c950","Type":"ContainerStarted","Data":"4074e375b1c96a5443499d963bff6dae455ae7c3047b54f38024b153344e33b9"} Mar 12 15:11:02 crc kubenswrapper[4832]: I0312 15:11:02.125355 4832 scope.go:117] "RemoveContainer" containerID="bc11ac11f0318e3ac5c8aef39593bad244e0e8edbe62b6faa622d3a9d45af635" Mar 12 15:11:02 crc kubenswrapper[4832]: I0312 15:11:02.129468 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-l26ps"] Mar 12 15:11:02 crc kubenswrapper[4832]: I0312 15:11:02.139946 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-l26ps"] Mar 12 15:11:02 crc kubenswrapper[4832]: I0312 15:11:02.652172 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09318380-c905-4904-823c-7d0fa5e1b37c" path="/var/lib/kubelet/pods/09318380-c905-4904-823c-7d0fa5e1b37c/volumes" Mar 12 15:11:03 crc kubenswrapper[4832]: I0312 15:11:03.074130 4832 generic.go:334] "Generic (PLEG): container finished" podID="79812a9d-e99f-418d-98c0-c9005079c950" containerID="b3b2e462e0428a0250f004626e7b04d3772b2aff2673aaa9bdb43fef076a07e6" exitCode=0 Mar 12 15:11:03 crc kubenswrapper[4832]: I0312 15:11:03.074179 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" 
event={"ID":"79812a9d-e99f-418d-98c0-c9005079c950","Type":"ContainerDied","Data":"b3b2e462e0428a0250f004626e7b04d3772b2aff2673aaa9bdb43fef076a07e6"} Mar 12 15:11:04 crc kubenswrapper[4832]: I0312 15:11:04.086031 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" event={"ID":"79812a9d-e99f-418d-98c0-c9005079c950","Type":"ContainerStarted","Data":"2f063a1d39f04b978093dfd5e5d94644ad081564de12d66799f3722c61a825fb"} Mar 12 15:11:04 crc kubenswrapper[4832]: I0312 15:11:04.086776 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:04 crc kubenswrapper[4832]: I0312 15:11:04.123756 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" podStartSLOduration=4.123703207 podStartE2EDuration="4.123703207s" podCreationTimestamp="2026-03-12 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:11:04.112211178 +0000 UTC m=+1422.756225414" watchObservedRunningTime="2026-03-12 15:11:04.123703207 +0000 UTC m=+1422.767717443" Mar 12 15:11:09 crc kubenswrapper[4832]: I0312 15:11:09.797962 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xsm2b"] Mar 12 15:11:09 crc kubenswrapper[4832]: E0312 15:11:09.799686 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09318380-c905-4904-823c-7d0fa5e1b37c" containerName="dnsmasq-dns" Mar 12 15:11:09 crc kubenswrapper[4832]: I0312 15:11:09.799709 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="09318380-c905-4904-823c-7d0fa5e1b37c" containerName="dnsmasq-dns" Mar 12 15:11:09 crc kubenswrapper[4832]: E0312 15:11:09.799735 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09318380-c905-4904-823c-7d0fa5e1b37c" containerName="init" Mar 12 15:11:09 crc 
kubenswrapper[4832]: I0312 15:11:09.799745 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="09318380-c905-4904-823c-7d0fa5e1b37c" containerName="init" Mar 12 15:11:09 crc kubenswrapper[4832]: I0312 15:11:09.799962 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="09318380-c905-4904-823c-7d0fa5e1b37c" containerName="dnsmasq-dns" Mar 12 15:11:09 crc kubenswrapper[4832]: I0312 15:11:09.802034 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xsm2b" Mar 12 15:11:09 crc kubenswrapper[4832]: I0312 15:11:09.819016 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xsm2b"] Mar 12 15:11:09 crc kubenswrapper[4832]: I0312 15:11:09.950773 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d9d1af-b37a-4497-bafb-62853f205563-utilities\") pod \"redhat-operators-xsm2b\" (UID: \"58d9d1af-b37a-4497-bafb-62853f205563\") " pod="openshift-marketplace/redhat-operators-xsm2b" Mar 12 15:11:09 crc kubenswrapper[4832]: I0312 15:11:09.950835 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d9d1af-b37a-4497-bafb-62853f205563-catalog-content\") pod \"redhat-operators-xsm2b\" (UID: \"58d9d1af-b37a-4497-bafb-62853f205563\") " pod="openshift-marketplace/redhat-operators-xsm2b" Mar 12 15:11:09 crc kubenswrapper[4832]: I0312 15:11:09.950926 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-475n2\" (UniqueName: \"kubernetes.io/projected/58d9d1af-b37a-4497-bafb-62853f205563-kube-api-access-475n2\") pod \"redhat-operators-xsm2b\" (UID: \"58d9d1af-b37a-4497-bafb-62853f205563\") " pod="openshift-marketplace/redhat-operators-xsm2b" Mar 12 15:11:10 crc kubenswrapper[4832]: I0312 
15:11:10.053035 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-475n2\" (UniqueName: \"kubernetes.io/projected/58d9d1af-b37a-4497-bafb-62853f205563-kube-api-access-475n2\") pod \"redhat-operators-xsm2b\" (UID: \"58d9d1af-b37a-4497-bafb-62853f205563\") " pod="openshift-marketplace/redhat-operators-xsm2b" Mar 12 15:11:10 crc kubenswrapper[4832]: I0312 15:11:10.053218 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d9d1af-b37a-4497-bafb-62853f205563-utilities\") pod \"redhat-operators-xsm2b\" (UID: \"58d9d1af-b37a-4497-bafb-62853f205563\") " pod="openshift-marketplace/redhat-operators-xsm2b" Mar 12 15:11:10 crc kubenswrapper[4832]: I0312 15:11:10.053269 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d9d1af-b37a-4497-bafb-62853f205563-catalog-content\") pod \"redhat-operators-xsm2b\" (UID: \"58d9d1af-b37a-4497-bafb-62853f205563\") " pod="openshift-marketplace/redhat-operators-xsm2b" Mar 12 15:11:10 crc kubenswrapper[4832]: I0312 15:11:10.053746 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d9d1af-b37a-4497-bafb-62853f205563-utilities\") pod \"redhat-operators-xsm2b\" (UID: \"58d9d1af-b37a-4497-bafb-62853f205563\") " pod="openshift-marketplace/redhat-operators-xsm2b" Mar 12 15:11:10 crc kubenswrapper[4832]: I0312 15:11:10.053786 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d9d1af-b37a-4497-bafb-62853f205563-catalog-content\") pod \"redhat-operators-xsm2b\" (UID: \"58d9d1af-b37a-4497-bafb-62853f205563\") " pod="openshift-marketplace/redhat-operators-xsm2b" Mar 12 15:11:10 crc kubenswrapper[4832]: I0312 15:11:10.086662 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-475n2\" (UniqueName: \"kubernetes.io/projected/58d9d1af-b37a-4497-bafb-62853f205563-kube-api-access-475n2\") pod \"redhat-operators-xsm2b\" (UID: \"58d9d1af-b37a-4497-bafb-62853f205563\") " pod="openshift-marketplace/redhat-operators-xsm2b" Mar 12 15:11:10 crc kubenswrapper[4832]: I0312 15:11:10.123288 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xsm2b" Mar 12 15:11:10 crc kubenswrapper[4832]: I0312 15:11:10.614556 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xsm2b"] Mar 12 15:11:11 crc kubenswrapper[4832]: I0312 15:11:11.151928 4832 generic.go:334] "Generic (PLEG): container finished" podID="58d9d1af-b37a-4497-bafb-62853f205563" containerID="dc5ae64b701634e4b06c823db6d30d3e055a23e1cf260613c283d3378ffc4108" exitCode=0 Mar 12 15:11:11 crc kubenswrapper[4832]: I0312 15:11:11.152179 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsm2b" event={"ID":"58d9d1af-b37a-4497-bafb-62853f205563","Type":"ContainerDied","Data":"dc5ae64b701634e4b06c823db6d30d3e055a23e1cf260613c283d3378ffc4108"} Mar 12 15:11:11 crc kubenswrapper[4832]: I0312 15:11:11.152234 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsm2b" event={"ID":"58d9d1af-b37a-4497-bafb-62853f205563","Type":"ContainerStarted","Data":"db5ff094a0a99996236856a4c708cbc4bea56eaeb676ae56ab3063e97ed2d86f"} Mar 12 15:11:11 crc kubenswrapper[4832]: I0312 15:11:11.246783 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c64bc9c5-478qp" Mar 12 15:11:11 crc kubenswrapper[4832]: I0312 15:11:11.331399 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-nznlw"] Mar 12 15:11:11 crc kubenswrapper[4832]: I0312 15:11:11.331692 4832 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/dnsmasq-dns-d558885bc-nznlw" podUID="511cb178-234d-403e-9b90-d0d90648a5d8" containerName="dnsmasq-dns" containerID="cri-o://334ad3b4537a2ee5c0232c23f7250ea6eb6feccc4e8ddb3ef5b1b24f6a254aac" gracePeriod=10 Mar 12 15:11:11 crc kubenswrapper[4832]: I0312 15:11:11.814047 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.007417 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-dns-svc\") pod \"511cb178-234d-403e-9b90-d0d90648a5d8\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.007518 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-ovsdbserver-sb\") pod \"511cb178-234d-403e-9b90-d0d90648a5d8\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.007539 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp777\" (UniqueName: \"kubernetes.io/projected/511cb178-234d-403e-9b90-d0d90648a5d8-kube-api-access-kp777\") pod \"511cb178-234d-403e-9b90-d0d90648a5d8\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.007687 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-openstack-edpm-ipam\") pod \"511cb178-234d-403e-9b90-d0d90648a5d8\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.007853 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-dns-swift-storage-0\") pod \"511cb178-234d-403e-9b90-d0d90648a5d8\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.007909 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-config\") pod \"511cb178-234d-403e-9b90-d0d90648a5d8\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.007947 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-ovsdbserver-nb\") pod \"511cb178-234d-403e-9b90-d0d90648a5d8\" (UID: \"511cb178-234d-403e-9b90-d0d90648a5d8\") " Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.040726 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/511cb178-234d-403e-9b90-d0d90648a5d8-kube-api-access-kp777" (OuterVolumeSpecName: "kube-api-access-kp777") pod "511cb178-234d-403e-9b90-d0d90648a5d8" (UID: "511cb178-234d-403e-9b90-d0d90648a5d8"). InnerVolumeSpecName "kube-api-access-kp777". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.067646 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "511cb178-234d-403e-9b90-d0d90648a5d8" (UID: "511cb178-234d-403e-9b90-d0d90648a5d8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.079747 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "511cb178-234d-403e-9b90-d0d90648a5d8" (UID: "511cb178-234d-403e-9b90-d0d90648a5d8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.081704 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "511cb178-234d-403e-9b90-d0d90648a5d8" (UID: "511cb178-234d-403e-9b90-d0d90648a5d8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.088839 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-config" (OuterVolumeSpecName: "config") pod "511cb178-234d-403e-9b90-d0d90648a5d8" (UID: "511cb178-234d-403e-9b90-d0d90648a5d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.090451 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "511cb178-234d-403e-9b90-d0d90648a5d8" (UID: "511cb178-234d-403e-9b90-d0d90648a5d8"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.096000 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "511cb178-234d-403e-9b90-d0d90648a5d8" (UID: "511cb178-234d-403e-9b90-d0d90648a5d8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.110217 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.110248 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.110257 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.110266 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.110274 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.110282 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp777\" (UniqueName: 
\"kubernetes.io/projected/511cb178-234d-403e-9b90-d0d90648a5d8-kube-api-access-kp777\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.110292 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/511cb178-234d-403e-9b90-d0d90648a5d8-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.163290 4832 generic.go:334] "Generic (PLEG): container finished" podID="511cb178-234d-403e-9b90-d0d90648a5d8" containerID="334ad3b4537a2ee5c0232c23f7250ea6eb6feccc4e8ddb3ef5b1b24f6a254aac" exitCode=0 Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.163327 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-nznlw" event={"ID":"511cb178-234d-403e-9b90-d0d90648a5d8","Type":"ContainerDied","Data":"334ad3b4537a2ee5c0232c23f7250ea6eb6feccc4e8ddb3ef5b1b24f6a254aac"} Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.163353 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-nznlw" event={"ID":"511cb178-234d-403e-9b90-d0d90648a5d8","Type":"ContainerDied","Data":"8ca1846c2f6966cd7fe65c32fd3ce5a49d235bfa8a1ad6bcd6afb1c21dab7cce"} Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.163361 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-nznlw" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.163369 4832 scope.go:117] "RemoveContainer" containerID="334ad3b4537a2ee5c0232c23f7250ea6eb6feccc4e8ddb3ef5b1b24f6a254aac" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.202200 4832 scope.go:117] "RemoveContainer" containerID="adc1123019fa22018b003be713edbb492c65b1d6049eb0ed338187370f4b56a5" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.209298 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-nznlw"] Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.221068 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-nznlw"] Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.224822 4832 scope.go:117] "RemoveContainer" containerID="334ad3b4537a2ee5c0232c23f7250ea6eb6feccc4e8ddb3ef5b1b24f6a254aac" Mar 12 15:11:12 crc kubenswrapper[4832]: E0312 15:11:12.225215 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"334ad3b4537a2ee5c0232c23f7250ea6eb6feccc4e8ddb3ef5b1b24f6a254aac\": container with ID starting with 334ad3b4537a2ee5c0232c23f7250ea6eb6feccc4e8ddb3ef5b1b24f6a254aac not found: ID does not exist" containerID="334ad3b4537a2ee5c0232c23f7250ea6eb6feccc4e8ddb3ef5b1b24f6a254aac" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.225257 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"334ad3b4537a2ee5c0232c23f7250ea6eb6feccc4e8ddb3ef5b1b24f6a254aac"} err="failed to get container status \"334ad3b4537a2ee5c0232c23f7250ea6eb6feccc4e8ddb3ef5b1b24f6a254aac\": rpc error: code = NotFound desc = could not find container \"334ad3b4537a2ee5c0232c23f7250ea6eb6feccc4e8ddb3ef5b1b24f6a254aac\": container with ID starting with 334ad3b4537a2ee5c0232c23f7250ea6eb6feccc4e8ddb3ef5b1b24f6a254aac not found: ID does not exist" Mar 12 
15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.225290 4832 scope.go:117] "RemoveContainer" containerID="adc1123019fa22018b003be713edbb492c65b1d6049eb0ed338187370f4b56a5" Mar 12 15:11:12 crc kubenswrapper[4832]: E0312 15:11:12.225724 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adc1123019fa22018b003be713edbb492c65b1d6049eb0ed338187370f4b56a5\": container with ID starting with adc1123019fa22018b003be713edbb492c65b1d6049eb0ed338187370f4b56a5 not found: ID does not exist" containerID="adc1123019fa22018b003be713edbb492c65b1d6049eb0ed338187370f4b56a5" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.225776 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adc1123019fa22018b003be713edbb492c65b1d6049eb0ed338187370f4b56a5"} err="failed to get container status \"adc1123019fa22018b003be713edbb492c65b1d6049eb0ed338187370f4b56a5\": rpc error: code = NotFound desc = could not find container \"adc1123019fa22018b003be713edbb492c65b1d6049eb0ed338187370f4b56a5\": container with ID starting with adc1123019fa22018b003be713edbb492c65b1d6049eb0ed338187370f4b56a5 not found: ID does not exist" Mar 12 15:11:12 crc kubenswrapper[4832]: I0312 15:11:12.638135 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="511cb178-234d-403e-9b90-d0d90648a5d8" path="/var/lib/kubelet/pods/511cb178-234d-403e-9b90-d0d90648a5d8/volumes" Mar 12 15:11:13 crc kubenswrapper[4832]: I0312 15:11:13.177577 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsm2b" event={"ID":"58d9d1af-b37a-4497-bafb-62853f205563","Type":"ContainerStarted","Data":"2928c8ca4b6532204280f9d2a101e3ff4379e57334b045815482d13dd8a41545"} Mar 12 15:11:17 crc kubenswrapper[4832]: I0312 15:11:17.238177 4832 generic.go:334] "Generic (PLEG): container finished" podID="58d9d1af-b37a-4497-bafb-62853f205563" 
containerID="2928c8ca4b6532204280f9d2a101e3ff4379e57334b045815482d13dd8a41545" exitCode=0 Mar 12 15:11:17 crc kubenswrapper[4832]: I0312 15:11:17.238257 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsm2b" event={"ID":"58d9d1af-b37a-4497-bafb-62853f205563","Type":"ContainerDied","Data":"2928c8ca4b6532204280f9d2a101e3ff4379e57334b045815482d13dd8a41545"} Mar 12 15:11:17 crc kubenswrapper[4832]: I0312 15:11:17.243190 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 15:11:18 crc kubenswrapper[4832]: I0312 15:11:18.251439 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsm2b" event={"ID":"58d9d1af-b37a-4497-bafb-62853f205563","Type":"ContainerStarted","Data":"b0c66124258ba2ef3aaeb06857f1396a87195ed754be0f7a1891978a230f5332"} Mar 12 15:11:18 crc kubenswrapper[4832]: I0312 15:11:18.282682 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xsm2b" podStartSLOduration=2.792791785 podStartE2EDuration="9.282656144s" podCreationTimestamp="2026-03-12 15:11:09 +0000 UTC" firstStartedPulling="2026-03-12 15:11:11.153595941 +0000 UTC m=+1429.797610167" lastFinishedPulling="2026-03-12 15:11:17.64346028 +0000 UTC m=+1436.287474526" observedRunningTime="2026-03-12 15:11:18.269062755 +0000 UTC m=+1436.913076991" watchObservedRunningTime="2026-03-12 15:11:18.282656144 +0000 UTC m=+1436.926670410" Mar 12 15:11:20 crc kubenswrapper[4832]: I0312 15:11:20.123637 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xsm2b" Mar 12 15:11:20 crc kubenswrapper[4832]: I0312 15:11:20.124877 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xsm2b" Mar 12 15:11:21 crc kubenswrapper[4832]: I0312 15:11:21.182024 4832 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-xsm2b" podUID="58d9d1af-b37a-4497-bafb-62853f205563" containerName="registry-server" probeResult="failure" output=< Mar 12 15:11:21 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Mar 12 15:11:21 crc kubenswrapper[4832]: > Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.304495 4832 generic.go:334] "Generic (PLEG): container finished" podID="b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3" containerID="9602510d6640951adf8fb860034ef5028f7e0eca02f1fc5c9ded7986f4d2097d" exitCode=0 Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.304537 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3","Type":"ContainerDied","Data":"9602510d6640951adf8fb860034ef5028f7e0eca02f1fc5c9ded7986f4d2097d"} Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.394280 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w"] Mar 12 15:11:24 crc kubenswrapper[4832]: E0312 15:11:24.394767 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511cb178-234d-403e-9b90-d0d90648a5d8" containerName="init" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.394787 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="511cb178-234d-403e-9b90-d0d90648a5d8" containerName="init" Mar 12 15:11:24 crc kubenswrapper[4832]: E0312 15:11:24.394812 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511cb178-234d-403e-9b90-d0d90648a5d8" containerName="dnsmasq-dns" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.394821 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="511cb178-234d-403e-9b90-d0d90648a5d8" containerName="dnsmasq-dns" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.395060 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="511cb178-234d-403e-9b90-d0d90648a5d8" 
containerName="dnsmasq-dns" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.395866 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.399414 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.399772 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.400140 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.400304 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6npm" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.407941 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w"] Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.553794 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mfbp\" (UniqueName: \"kubernetes.io/projected/9a6fe906-9add-49ec-ad85-4f7ba8034f73-kube-api-access-4mfbp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m622w\" (UID: \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.554130 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6fe906-9add-49ec-ad85-4f7ba8034f73-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m622w\" (UID: \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.554284 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6fe906-9add-49ec-ad85-4f7ba8034f73-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m622w\" (UID: \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.554353 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a6fe906-9add-49ec-ad85-4f7ba8034f73-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m622w\" (UID: \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.656243 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a6fe906-9add-49ec-ad85-4f7ba8034f73-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m622w\" (UID: \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.656337 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mfbp\" (UniqueName: \"kubernetes.io/projected/9a6fe906-9add-49ec-ad85-4f7ba8034f73-kube-api-access-4mfbp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m622w\" (UID: \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 
15:11:24.656364 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6fe906-9add-49ec-ad85-4f7ba8034f73-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m622w\" (UID: \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.656453 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6fe906-9add-49ec-ad85-4f7ba8034f73-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m622w\" (UID: \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.661096 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6fe906-9add-49ec-ad85-4f7ba8034f73-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m622w\" (UID: \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.665096 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a6fe906-9add-49ec-ad85-4f7ba8034f73-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m622w\" (UID: \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.679041 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6fe906-9add-49ec-ad85-4f7ba8034f73-repo-setup-combined-ca-bundle\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-m622w\" (UID: \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.680391 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mfbp\" (UniqueName: \"kubernetes.io/projected/9a6fe906-9add-49ec-ad85-4f7ba8034f73-kube-api-access-4mfbp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m622w\" (UID: \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" Mar 12 15:11:24 crc kubenswrapper[4832]: I0312 15:11:24.798467 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" Mar 12 15:11:25 crc kubenswrapper[4832]: I0312 15:11:25.319704 4832 generic.go:334] "Generic (PLEG): container finished" podID="dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c" containerID="1a8f474263c7933e6080efde797151bc0fa597a6caee02c80c80e5f37d6f4367" exitCode=0 Mar 12 15:11:25 crc kubenswrapper[4832]: I0312 15:11:25.319812 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c","Type":"ContainerDied","Data":"1a8f474263c7933e6080efde797151bc0fa597a6caee02c80c80e5f37d6f4367"} Mar 12 15:11:25 crc kubenswrapper[4832]: I0312 15:11:25.330860 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3","Type":"ContainerStarted","Data":"d1099b9ca47e0134cec6f6c31fdf58de14141f615bbc231e1aa806af7d2b04b5"} Mar 12 15:11:25 crc kubenswrapper[4832]: I0312 15:11:25.331146 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 12 15:11:25 crc kubenswrapper[4832]: I0312 15:11:25.396409 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-server-0" podStartSLOduration=37.396387438 podStartE2EDuration="37.396387438s" podCreationTimestamp="2026-03-12 15:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:11:25.376099437 +0000 UTC m=+1444.020113673" watchObservedRunningTime="2026-03-12 15:11:25.396387438 +0000 UTC m=+1444.040401734" Mar 12 15:11:25 crc kubenswrapper[4832]: W0312 15:11:25.406316 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a6fe906_9add_49ec_ad85_4f7ba8034f73.slice/crio-79e093c7cce155426b60de31d6547510ca9a850584a432bd5b2a6d5f9f0f428f WatchSource:0}: Error finding container 79e093c7cce155426b60de31d6547510ca9a850584a432bd5b2a6d5f9f0f428f: Status 404 returned error can't find the container with id 79e093c7cce155426b60de31d6547510ca9a850584a432bd5b2a6d5f9f0f428f Mar 12 15:11:25 crc kubenswrapper[4832]: I0312 15:11:25.409224 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w"] Mar 12 15:11:26 crc kubenswrapper[4832]: I0312 15:11:26.341619 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c","Type":"ContainerStarted","Data":"19117abf2645f5450257d9b71236550c3b7affc93d2e5976972fb09227f9e06d"} Mar 12 15:11:26 crc kubenswrapper[4832]: I0312 15:11:26.341833 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:11:26 crc kubenswrapper[4832]: I0312 15:11:26.343121 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" event={"ID":"9a6fe906-9add-49ec-ad85-4f7ba8034f73","Type":"ContainerStarted","Data":"79e093c7cce155426b60de31d6547510ca9a850584a432bd5b2a6d5f9f0f428f"} Mar 12 15:11:26 crc 
kubenswrapper[4832]: I0312 15:11:26.373530 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.373490733 podStartE2EDuration="37.373490733s" podCreationTimestamp="2026-03-12 15:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:11:26.361739457 +0000 UTC m=+1445.005753713" watchObservedRunningTime="2026-03-12 15:11:26.373490733 +0000 UTC m=+1445.017504969" Mar 12 15:11:30 crc kubenswrapper[4832]: I0312 15:11:30.192436 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xsm2b" Mar 12 15:11:30 crc kubenswrapper[4832]: I0312 15:11:30.245645 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xsm2b" Mar 12 15:11:30 crc kubenswrapper[4832]: I0312 15:11:30.440790 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xsm2b"] Mar 12 15:11:31 crc kubenswrapper[4832]: I0312 15:11:31.391989 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xsm2b" podUID="58d9d1af-b37a-4497-bafb-62853f205563" containerName="registry-server" containerID="cri-o://b0c66124258ba2ef3aaeb06857f1396a87195ed754be0f7a1891978a230f5332" gracePeriod=2 Mar 12 15:11:32 crc kubenswrapper[4832]: I0312 15:11:32.409061 4832 generic.go:334] "Generic (PLEG): container finished" podID="58d9d1af-b37a-4497-bafb-62853f205563" containerID="b0c66124258ba2ef3aaeb06857f1396a87195ed754be0f7a1891978a230f5332" exitCode=0 Mar 12 15:11:32 crc kubenswrapper[4832]: I0312 15:11:32.409158 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsm2b" 
event={"ID":"58d9d1af-b37a-4497-bafb-62853f205563","Type":"ContainerDied","Data":"b0c66124258ba2ef3aaeb06857f1396a87195ed754be0f7a1891978a230f5332"} Mar 12 15:11:35 crc kubenswrapper[4832]: I0312 15:11:35.958078 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xsm2b" Mar 12 15:11:36 crc kubenswrapper[4832]: I0312 15:11:36.121706 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d9d1af-b37a-4497-bafb-62853f205563-catalog-content\") pod \"58d9d1af-b37a-4497-bafb-62853f205563\" (UID: \"58d9d1af-b37a-4497-bafb-62853f205563\") " Mar 12 15:11:36 crc kubenswrapper[4832]: I0312 15:11:36.121822 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d9d1af-b37a-4497-bafb-62853f205563-utilities\") pod \"58d9d1af-b37a-4497-bafb-62853f205563\" (UID: \"58d9d1af-b37a-4497-bafb-62853f205563\") " Mar 12 15:11:36 crc kubenswrapper[4832]: I0312 15:11:36.121853 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-475n2\" (UniqueName: \"kubernetes.io/projected/58d9d1af-b37a-4497-bafb-62853f205563-kube-api-access-475n2\") pod \"58d9d1af-b37a-4497-bafb-62853f205563\" (UID: \"58d9d1af-b37a-4497-bafb-62853f205563\") " Mar 12 15:11:36 crc kubenswrapper[4832]: I0312 15:11:36.123059 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58d9d1af-b37a-4497-bafb-62853f205563-utilities" (OuterVolumeSpecName: "utilities") pod "58d9d1af-b37a-4497-bafb-62853f205563" (UID: "58d9d1af-b37a-4497-bafb-62853f205563"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:11:36 crc kubenswrapper[4832]: I0312 15:11:36.126698 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58d9d1af-b37a-4497-bafb-62853f205563-kube-api-access-475n2" (OuterVolumeSpecName: "kube-api-access-475n2") pod "58d9d1af-b37a-4497-bafb-62853f205563" (UID: "58d9d1af-b37a-4497-bafb-62853f205563"). InnerVolumeSpecName "kube-api-access-475n2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:11:36 crc kubenswrapper[4832]: I0312 15:11:36.224633 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d9d1af-b37a-4497-bafb-62853f205563-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:36 crc kubenswrapper[4832]: I0312 15:11:36.224692 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-475n2\" (UniqueName: \"kubernetes.io/projected/58d9d1af-b37a-4497-bafb-62853f205563-kube-api-access-475n2\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:36 crc kubenswrapper[4832]: I0312 15:11:36.249300 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58d9d1af-b37a-4497-bafb-62853f205563-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58d9d1af-b37a-4497-bafb-62853f205563" (UID: "58d9d1af-b37a-4497-bafb-62853f205563"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:11:36 crc kubenswrapper[4832]: I0312 15:11:36.326648 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d9d1af-b37a-4497-bafb-62853f205563-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:36 crc kubenswrapper[4832]: I0312 15:11:36.454177 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" event={"ID":"9a6fe906-9add-49ec-ad85-4f7ba8034f73","Type":"ContainerStarted","Data":"352faedd7bc18726c30725e044ddfd74ed476f0699ab676f04a3f0ddaf7f8603"} Mar 12 15:11:36 crc kubenswrapper[4832]: I0312 15:11:36.457628 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsm2b" event={"ID":"58d9d1af-b37a-4497-bafb-62853f205563","Type":"ContainerDied","Data":"db5ff094a0a99996236856a4c708cbc4bea56eaeb676ae56ab3063e97ed2d86f"} Mar 12 15:11:36 crc kubenswrapper[4832]: I0312 15:11:36.457682 4832 scope.go:117] "RemoveContainer" containerID="b0c66124258ba2ef3aaeb06857f1396a87195ed754be0f7a1891978a230f5332" Mar 12 15:11:36 crc kubenswrapper[4832]: I0312 15:11:36.457820 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xsm2b" Mar 12 15:11:36 crc kubenswrapper[4832]: I0312 15:11:36.495234 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" podStartSLOduration=2.211839686 podStartE2EDuration="12.495217515s" podCreationTimestamp="2026-03-12 15:11:24 +0000 UTC" firstStartedPulling="2026-03-12 15:11:25.413271791 +0000 UTC m=+1444.057286017" lastFinishedPulling="2026-03-12 15:11:35.69664962 +0000 UTC m=+1454.340663846" observedRunningTime="2026-03-12 15:11:36.491875509 +0000 UTC m=+1455.135889745" watchObservedRunningTime="2026-03-12 15:11:36.495217515 +0000 UTC m=+1455.139231741" Mar 12 15:11:36 crc kubenswrapper[4832]: I0312 15:11:36.505346 4832 scope.go:117] "RemoveContainer" containerID="2928c8ca4b6532204280f9d2a101e3ff4379e57334b045815482d13dd8a41545" Mar 12 15:11:36 crc kubenswrapper[4832]: I0312 15:11:36.522384 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xsm2b"] Mar 12 15:11:36 crc kubenswrapper[4832]: I0312 15:11:36.533525 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xsm2b"] Mar 12 15:11:36 crc kubenswrapper[4832]: I0312 15:11:36.536198 4832 scope.go:117] "RemoveContainer" containerID="dc5ae64b701634e4b06c823db6d30d3e055a23e1cf260613c283d3378ffc4108" Mar 12 15:11:36 crc kubenswrapper[4832]: I0312 15:11:36.631215 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58d9d1af-b37a-4497-bafb-62853f205563" path="/var/lib/kubelet/pods/58d9d1af-b37a-4497-bafb-62853f205563/volumes" Mar 12 15:11:39 crc kubenswrapper[4832]: I0312 15:11:39.339777 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 12 15:11:40 crc kubenswrapper[4832]: I0312 15:11:40.299665 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/rabbitmq-cell1-server-0" Mar 12 15:11:46 crc kubenswrapper[4832]: I0312 15:11:46.572518 4832 generic.go:334] "Generic (PLEG): container finished" podID="9a6fe906-9add-49ec-ad85-4f7ba8034f73" containerID="352faedd7bc18726c30725e044ddfd74ed476f0699ab676f04a3f0ddaf7f8603" exitCode=0 Mar 12 15:11:46 crc kubenswrapper[4832]: I0312 15:11:46.572618 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" event={"ID":"9a6fe906-9add-49ec-ad85-4f7ba8034f73","Type":"ContainerDied","Data":"352faedd7bc18726c30725e044ddfd74ed476f0699ab676f04a3f0ddaf7f8603"} Mar 12 15:11:47 crc kubenswrapper[4832]: I0312 15:11:47.943999 4832 scope.go:117] "RemoveContainer" containerID="f8ab975f49e1daa1b7b54f67d387974ef6d60fc65ee8a9102a751775ec4a3764" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.043084 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.049138 4832 scope.go:117] "RemoveContainer" containerID="696a923cf37434e9108ef8cba4521c289308126a05a849e7b566b914adae7ec6" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.183704 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6fe906-9add-49ec-ad85-4f7ba8034f73-repo-setup-combined-ca-bundle\") pod \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\" (UID: \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\") " Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.184097 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a6fe906-9add-49ec-ad85-4f7ba8034f73-ssh-key-openstack-edpm-ipam\") pod \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\" (UID: \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\") " Mar 12 15:11:48 crc 
kubenswrapper[4832]: I0312 15:11:48.184140 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6fe906-9add-49ec-ad85-4f7ba8034f73-inventory\") pod \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\" (UID: \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\") " Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.184165 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mfbp\" (UniqueName: \"kubernetes.io/projected/9a6fe906-9add-49ec-ad85-4f7ba8034f73-kube-api-access-4mfbp\") pod \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\" (UID: \"9a6fe906-9add-49ec-ad85-4f7ba8034f73\") " Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.189799 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6fe906-9add-49ec-ad85-4f7ba8034f73-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "9a6fe906-9add-49ec-ad85-4f7ba8034f73" (UID: "9a6fe906-9add-49ec-ad85-4f7ba8034f73"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.190048 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a6fe906-9add-49ec-ad85-4f7ba8034f73-kube-api-access-4mfbp" (OuterVolumeSpecName: "kube-api-access-4mfbp") pod "9a6fe906-9add-49ec-ad85-4f7ba8034f73" (UID: "9a6fe906-9add-49ec-ad85-4f7ba8034f73"). InnerVolumeSpecName "kube-api-access-4mfbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.209287 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6fe906-9add-49ec-ad85-4f7ba8034f73-inventory" (OuterVolumeSpecName: "inventory") pod "9a6fe906-9add-49ec-ad85-4f7ba8034f73" (UID: "9a6fe906-9add-49ec-ad85-4f7ba8034f73"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.212385 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6fe906-9add-49ec-ad85-4f7ba8034f73-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9a6fe906-9add-49ec-ad85-4f7ba8034f73" (UID: "9a6fe906-9add-49ec-ad85-4f7ba8034f73"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.286441 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a6fe906-9add-49ec-ad85-4f7ba8034f73-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.286473 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6fe906-9add-49ec-ad85-4f7ba8034f73-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.286486 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mfbp\" (UniqueName: \"kubernetes.io/projected/9a6fe906-9add-49ec-ad85-4f7ba8034f73-kube-api-access-4mfbp\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.286497 4832 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6fe906-9add-49ec-ad85-4f7ba8034f73-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.592387 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" 
event={"ID":"9a6fe906-9add-49ec-ad85-4f7ba8034f73","Type":"ContainerDied","Data":"79e093c7cce155426b60de31d6547510ca9a850584a432bd5b2a6d5f9f0f428f"} Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.592430 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79e093c7cce155426b60de31d6547510ca9a850584a432bd5b2a6d5f9f0f428f" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.592452 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m622w" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.696734 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c"] Mar 12 15:11:48 crc kubenswrapper[4832]: E0312 15:11:48.697227 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d9d1af-b37a-4497-bafb-62853f205563" containerName="extract-content" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.697249 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d9d1af-b37a-4497-bafb-62853f205563" containerName="extract-content" Mar 12 15:11:48 crc kubenswrapper[4832]: E0312 15:11:48.697293 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d9d1af-b37a-4497-bafb-62853f205563" containerName="extract-utilities" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.697303 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d9d1af-b37a-4497-bafb-62853f205563" containerName="extract-utilities" Mar 12 15:11:48 crc kubenswrapper[4832]: E0312 15:11:48.697321 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a6fe906-9add-49ec-ad85-4f7ba8034f73" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.697331 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a6fe906-9add-49ec-ad85-4f7ba8034f73" 
containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 12 15:11:48 crc kubenswrapper[4832]: E0312 15:11:48.697353 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d9d1af-b37a-4497-bafb-62853f205563" containerName="registry-server" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.697361 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d9d1af-b37a-4497-bafb-62853f205563" containerName="registry-server" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.697600 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a6fe906-9add-49ec-ad85-4f7ba8034f73" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.697631 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="58d9d1af-b37a-4497-bafb-62853f205563" containerName="registry-server" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.698463 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.700948 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6npm" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.701135 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.701134 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.701579 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.715922 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c"] Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.795822 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96ffa11d-c5f1-4b32-b2cc-6bc830cf4662-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-v7m7c\" (UID: \"96ffa11d-c5f1-4b32-b2cc-6bc830cf4662\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.796195 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rxpv\" (UniqueName: \"kubernetes.io/projected/96ffa11d-c5f1-4b32-b2cc-6bc830cf4662-kube-api-access-5rxpv\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-v7m7c\" (UID: \"96ffa11d-c5f1-4b32-b2cc-6bc830cf4662\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.796368 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96ffa11d-c5f1-4b32-b2cc-6bc830cf4662-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-v7m7c\" (UID: \"96ffa11d-c5f1-4b32-b2cc-6bc830cf4662\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.898305 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96ffa11d-c5f1-4b32-b2cc-6bc830cf4662-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-v7m7c\" (UID: \"96ffa11d-c5f1-4b32-b2cc-6bc830cf4662\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.898458 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rxpv\" (UniqueName: \"kubernetes.io/projected/96ffa11d-c5f1-4b32-b2cc-6bc830cf4662-kube-api-access-5rxpv\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-v7m7c\" (UID: \"96ffa11d-c5f1-4b32-b2cc-6bc830cf4662\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.898553 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96ffa11d-c5f1-4b32-b2cc-6bc830cf4662-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-v7m7c\" (UID: \"96ffa11d-c5f1-4b32-b2cc-6bc830cf4662\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.902398 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96ffa11d-c5f1-4b32-b2cc-6bc830cf4662-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-v7m7c\" (UID: \"96ffa11d-c5f1-4b32-b2cc-6bc830cf4662\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.903582 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96ffa11d-c5f1-4b32-b2cc-6bc830cf4662-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-v7m7c\" (UID: \"96ffa11d-c5f1-4b32-b2cc-6bc830cf4662\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c" Mar 12 15:11:48 crc kubenswrapper[4832]: I0312 15:11:48.916268 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rxpv\" (UniqueName: \"kubernetes.io/projected/96ffa11d-c5f1-4b32-b2cc-6bc830cf4662-kube-api-access-5rxpv\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-v7m7c\" (UID: \"96ffa11d-c5f1-4b32-b2cc-6bc830cf4662\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c" Mar 12 15:11:49 crc kubenswrapper[4832]: I0312 15:11:49.036165 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c" Mar 12 15:11:49 crc kubenswrapper[4832]: I0312 15:11:49.599474 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c"] Mar 12 15:11:49 crc kubenswrapper[4832]: W0312 15:11:49.608958 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96ffa11d_c5f1_4b32_b2cc_6bc830cf4662.slice/crio-50b44af17b01443e166e9396a990c4dd7720e80f6f453d84bd7350bbde19e971 WatchSource:0}: Error finding container 50b44af17b01443e166e9396a990c4dd7720e80f6f453d84bd7350bbde19e971: Status 404 returned error can't find the container with id 50b44af17b01443e166e9396a990c4dd7720e80f6f453d84bd7350bbde19e971 Mar 12 15:11:50 crc kubenswrapper[4832]: I0312 15:11:50.636827 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c" podStartSLOduration=2.191626705 podStartE2EDuration="2.636799305s" podCreationTimestamp="2026-03-12 15:11:48 +0000 UTC" firstStartedPulling="2026-03-12 15:11:49.611994715 +0000 UTC m=+1468.256008941" lastFinishedPulling="2026-03-12 15:11:50.057167295 +0000 UTC m=+1468.701181541" observedRunningTime="2026-03-12 15:11:50.635012663 +0000 UTC m=+1469.279026939" watchObservedRunningTime="2026-03-12 15:11:50.636799305 +0000 UTC m=+1469.280813571" Mar 12 15:11:50 crc kubenswrapper[4832]: I0312 15:11:50.645485 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c" event={"ID":"96ffa11d-c5f1-4b32-b2cc-6bc830cf4662","Type":"ContainerStarted","Data":"d008f1d95887f7d678f089b4d997696bc0f01c3f4eca63ec3397cadb5c3407a6"} Mar 12 15:11:50 crc kubenswrapper[4832]: I0312 15:11:50.645580 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c" 
event={"ID":"96ffa11d-c5f1-4b32-b2cc-6bc830cf4662","Type":"ContainerStarted","Data":"50b44af17b01443e166e9396a990c4dd7720e80f6f453d84bd7350bbde19e971"} Mar 12 15:11:53 crc kubenswrapper[4832]: I0312 15:11:53.656240 4832 generic.go:334] "Generic (PLEG): container finished" podID="96ffa11d-c5f1-4b32-b2cc-6bc830cf4662" containerID="d008f1d95887f7d678f089b4d997696bc0f01c3f4eca63ec3397cadb5c3407a6" exitCode=0 Mar 12 15:11:53 crc kubenswrapper[4832]: I0312 15:11:53.656377 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c" event={"ID":"96ffa11d-c5f1-4b32-b2cc-6bc830cf4662","Type":"ContainerDied","Data":"d008f1d95887f7d678f089b4d997696bc0f01c3f4eca63ec3397cadb5c3407a6"} Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.080054 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.236942 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96ffa11d-c5f1-4b32-b2cc-6bc830cf4662-inventory\") pod \"96ffa11d-c5f1-4b32-b2cc-6bc830cf4662\" (UID: \"96ffa11d-c5f1-4b32-b2cc-6bc830cf4662\") " Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.236984 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rxpv\" (UniqueName: \"kubernetes.io/projected/96ffa11d-c5f1-4b32-b2cc-6bc830cf4662-kube-api-access-5rxpv\") pod \"96ffa11d-c5f1-4b32-b2cc-6bc830cf4662\" (UID: \"96ffa11d-c5f1-4b32-b2cc-6bc830cf4662\") " Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.237007 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96ffa11d-c5f1-4b32-b2cc-6bc830cf4662-ssh-key-openstack-edpm-ipam\") pod \"96ffa11d-c5f1-4b32-b2cc-6bc830cf4662\" (UID: 
\"96ffa11d-c5f1-4b32-b2cc-6bc830cf4662\") " Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.244043 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96ffa11d-c5f1-4b32-b2cc-6bc830cf4662-kube-api-access-5rxpv" (OuterVolumeSpecName: "kube-api-access-5rxpv") pod "96ffa11d-c5f1-4b32-b2cc-6bc830cf4662" (UID: "96ffa11d-c5f1-4b32-b2cc-6bc830cf4662"). InnerVolumeSpecName "kube-api-access-5rxpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.270326 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ffa11d-c5f1-4b32-b2cc-6bc830cf4662-inventory" (OuterVolumeSpecName: "inventory") pod "96ffa11d-c5f1-4b32-b2cc-6bc830cf4662" (UID: "96ffa11d-c5f1-4b32-b2cc-6bc830cf4662"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.274777 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ffa11d-c5f1-4b32-b2cc-6bc830cf4662-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "96ffa11d-c5f1-4b32-b2cc-6bc830cf4662" (UID: "96ffa11d-c5f1-4b32-b2cc-6bc830cf4662"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.339942 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96ffa11d-c5f1-4b32-b2cc-6bc830cf4662-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.339998 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rxpv\" (UniqueName: \"kubernetes.io/projected/96ffa11d-c5f1-4b32-b2cc-6bc830cf4662-kube-api-access-5rxpv\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.340021 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96ffa11d-c5f1-4b32-b2cc-6bc830cf4662-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.685926 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c" event={"ID":"96ffa11d-c5f1-4b32-b2cc-6bc830cf4662","Type":"ContainerDied","Data":"50b44af17b01443e166e9396a990c4dd7720e80f6f453d84bd7350bbde19e971"} Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.686007 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50b44af17b01443e166e9396a990c4dd7720e80f6f453d84bd7350bbde19e971" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.686127 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-v7m7c" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.763566 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j"] Mar 12 15:11:55 crc kubenswrapper[4832]: E0312 15:11:55.764068 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ffa11d-c5f1-4b32-b2cc-6bc830cf4662" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.764088 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ffa11d-c5f1-4b32-b2cc-6bc830cf4662" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.764411 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ffa11d-c5f1-4b32-b2cc-6bc830cf4662" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.765305 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.768144 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6npm" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.768237 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.768289 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.769964 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.779349 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j"] Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.849803 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4787eb10-18fc-4da2-98ce-246687619641-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j\" (UID: \"4787eb10-18fc-4da2-98ce-246687619641\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.850223 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4787eb10-18fc-4da2-98ce-246687619641-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j\" (UID: \"4787eb10-18fc-4da2-98ce-246687619641\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.850463 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjr42\" (UniqueName: \"kubernetes.io/projected/4787eb10-18fc-4da2-98ce-246687619641-kube-api-access-kjr42\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j\" (UID: \"4787eb10-18fc-4da2-98ce-246687619641\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.850857 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4787eb10-18fc-4da2-98ce-246687619641-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j\" (UID: \"4787eb10-18fc-4da2-98ce-246687619641\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.953284 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4787eb10-18fc-4da2-98ce-246687619641-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j\" (UID: \"4787eb10-18fc-4da2-98ce-246687619641\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.953349 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4787eb10-18fc-4da2-98ce-246687619641-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j\" (UID: \"4787eb10-18fc-4da2-98ce-246687619641\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.953415 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjr42\" (UniqueName: 
\"kubernetes.io/projected/4787eb10-18fc-4da2-98ce-246687619641-kube-api-access-kjr42\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j\" (UID: \"4787eb10-18fc-4da2-98ce-246687619641\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.953467 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4787eb10-18fc-4da2-98ce-246687619641-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j\" (UID: \"4787eb10-18fc-4da2-98ce-246687619641\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.957203 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4787eb10-18fc-4da2-98ce-246687619641-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j\" (UID: \"4787eb10-18fc-4da2-98ce-246687619641\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.958144 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4787eb10-18fc-4da2-98ce-246687619641-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j\" (UID: \"4787eb10-18fc-4da2-98ce-246687619641\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.963195 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4787eb10-18fc-4da2-98ce-246687619641-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j\" (UID: \"4787eb10-18fc-4da2-98ce-246687619641\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" Mar 12 15:11:55 crc kubenswrapper[4832]: I0312 15:11:55.974795 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjr42\" (UniqueName: \"kubernetes.io/projected/4787eb10-18fc-4da2-98ce-246687619641-kube-api-access-kjr42\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j\" (UID: \"4787eb10-18fc-4da2-98ce-246687619641\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" Mar 12 15:11:56 crc kubenswrapper[4832]: I0312 15:11:56.084794 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" Mar 12 15:11:56 crc kubenswrapper[4832]: I0312 15:11:56.684063 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j"] Mar 12 15:11:56 crc kubenswrapper[4832]: W0312 15:11:56.688187 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4787eb10_18fc_4da2_98ce_246687619641.slice/crio-247dfd97423ea703751027945993b4babdd44a3327de1db4d919a534f53685e9 WatchSource:0}: Error finding container 247dfd97423ea703751027945993b4babdd44a3327de1db4d919a534f53685e9: Status 404 returned error can't find the container with id 247dfd97423ea703751027945993b4babdd44a3327de1db4d919a534f53685e9 Mar 12 15:11:56 crc kubenswrapper[4832]: E0312 15:11:56.797962 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Mar 12 15:11:56 crc kubenswrapper[4832]: E0312 15:11:56.798298 4832 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 15:11:56 crc 
kubenswrapper[4832]: container &Container{Name:bootstrap-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p osp.edpm.bootstrap -i bootstrap-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Mar 12 15:11:56 crc kubenswrapper[4832]: osp.edpm.bootstrap Mar 12 15:11:56 crc kubenswrapper[4832]: Mar 12 15:11:56 crc kubenswrapper[4832]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Mar 12 15:11:56 crc kubenswrapper[4832]: edpm_override_hosts: openstack-edpm-ipam Mar 12 15:11:56 crc kubenswrapper[4832]: edpm_service_type: bootstrap Mar 12 15:11:56 crc kubenswrapper[4832]: Mar 12 15:11:56 crc kubenswrapper[4832]: Mar 12 15:11:56 crc kubenswrapper[4832]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bootstrap-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/bootstrap,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kjr42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*
1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j_openstack(4787eb10-18fc-4da2-98ce-246687619641): ErrImagePull: initializing source docker://quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest: Requesting bearer token: invalid status code from registry 502 (Bad Gateway) Mar 12 15:11:56 crc kubenswrapper[4832]: > logger="UnhandledError" Mar 12 15:11:56 crc kubenswrapper[4832]: E0312 15:11:56.799842 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bootstrap-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"initializing source docker://quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)\"" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" podUID="4787eb10-18fc-4da2-98ce-246687619641" Mar 12 15:11:57 crc kubenswrapper[4832]: I0312 15:11:57.713396 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" event={"ID":"4787eb10-18fc-4da2-98ce-246687619641","Type":"ContainerStarted","Data":"247dfd97423ea703751027945993b4babdd44a3327de1db4d919a534f53685e9"} Mar 12 15:11:57 crc kubenswrapper[4832]: E0312 15:11:57.716183 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bootstrap-edpm-deployment-openstack-edpm-ipam\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" podUID="4787eb10-18fc-4da2-98ce-246687619641" Mar 12 15:11:58 crc kubenswrapper[4832]: E0312 15:11:58.728621 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bootstrap-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" podUID="4787eb10-18fc-4da2-98ce-246687619641" Mar 12 15:12:00 crc kubenswrapper[4832]: I0312 15:12:00.137693 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555472-hl7l5"] Mar 12 15:12:00 crc kubenswrapper[4832]: I0312 15:12:00.139876 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555472-hl7l5" Mar 12 15:12:00 crc kubenswrapper[4832]: I0312 15:12:00.144906 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:12:00 crc kubenswrapper[4832]: I0312 15:12:00.145156 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:12:00 crc kubenswrapper[4832]: I0312 15:12:00.149660 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:12:00 crc kubenswrapper[4832]: I0312 15:12:00.151144 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555472-hl7l5"] Mar 12 15:12:00 crc kubenswrapper[4832]: I0312 15:12:00.240414 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv2z4\" (UniqueName: 
\"kubernetes.io/projected/ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e-kube-api-access-zv2z4\") pod \"auto-csr-approver-29555472-hl7l5\" (UID: \"ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e\") " pod="openshift-infra/auto-csr-approver-29555472-hl7l5" Mar 12 15:12:00 crc kubenswrapper[4832]: I0312 15:12:00.342306 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv2z4\" (UniqueName: \"kubernetes.io/projected/ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e-kube-api-access-zv2z4\") pod \"auto-csr-approver-29555472-hl7l5\" (UID: \"ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e\") " pod="openshift-infra/auto-csr-approver-29555472-hl7l5" Mar 12 15:12:00 crc kubenswrapper[4832]: I0312 15:12:00.382245 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv2z4\" (UniqueName: \"kubernetes.io/projected/ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e-kube-api-access-zv2z4\") pod \"auto-csr-approver-29555472-hl7l5\" (UID: \"ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e\") " pod="openshift-infra/auto-csr-approver-29555472-hl7l5" Mar 12 15:12:00 crc kubenswrapper[4832]: I0312 15:12:00.469445 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555472-hl7l5" Mar 12 15:12:01 crc kubenswrapper[4832]: I0312 15:12:01.033897 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555472-hl7l5"] Mar 12 15:12:01 crc kubenswrapper[4832]: I0312 15:12:01.774373 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555472-hl7l5" event={"ID":"ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e","Type":"ContainerStarted","Data":"1daf951cc11bc2b65d352e02d78e428da1f8b59ae0a26a9750c49824d6b214b6"} Mar 12 15:12:02 crc kubenswrapper[4832]: I0312 15:12:02.787032 4832 generic.go:334] "Generic (PLEG): container finished" podID="ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e" containerID="b583147b9bfe2a872bc35bec47be57571b5c6846bae1277a496b67de764dd0e8" exitCode=0 Mar 12 15:12:02 crc kubenswrapper[4832]: I0312 15:12:02.787694 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555472-hl7l5" event={"ID":"ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e","Type":"ContainerDied","Data":"b583147b9bfe2a872bc35bec47be57571b5c6846bae1277a496b67de764dd0e8"} Mar 12 15:12:04 crc kubenswrapper[4832]: I0312 15:12:04.233365 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555472-hl7l5" Mar 12 15:12:04 crc kubenswrapper[4832]: I0312 15:12:04.363134 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv2z4\" (UniqueName: \"kubernetes.io/projected/ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e-kube-api-access-zv2z4\") pod \"ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e\" (UID: \"ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e\") " Mar 12 15:12:04 crc kubenswrapper[4832]: I0312 15:12:04.368741 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e-kube-api-access-zv2z4" (OuterVolumeSpecName: "kube-api-access-zv2z4") pod "ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e" (UID: "ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e"). InnerVolumeSpecName "kube-api-access-zv2z4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:12:04 crc kubenswrapper[4832]: I0312 15:12:04.465703 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv2z4\" (UniqueName: \"kubernetes.io/projected/ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e-kube-api-access-zv2z4\") on node \"crc\" DevicePath \"\"" Mar 12 15:12:04 crc kubenswrapper[4832]: I0312 15:12:04.810737 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555472-hl7l5" event={"ID":"ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e","Type":"ContainerDied","Data":"1daf951cc11bc2b65d352e02d78e428da1f8b59ae0a26a9750c49824d6b214b6"} Mar 12 15:12:04 crc kubenswrapper[4832]: I0312 15:12:04.810785 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1daf951cc11bc2b65d352e02d78e428da1f8b59ae0a26a9750c49824d6b214b6" Mar 12 15:12:04 crc kubenswrapper[4832]: I0312 15:12:04.810844 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555472-hl7l5" Mar 12 15:12:05 crc kubenswrapper[4832]: I0312 15:12:05.306244 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555466-7qgq7"] Mar 12 15:12:05 crc kubenswrapper[4832]: I0312 15:12:05.315076 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555466-7qgq7"] Mar 12 15:12:06 crc kubenswrapper[4832]: I0312 15:12:06.640264 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4c0833b-5f89-41c6-99a8-a07a779a5e22" path="/var/lib/kubelet/pods/e4c0833b-5f89-41c6-99a8-a07a779a5e22/volumes" Mar 12 15:12:15 crc kubenswrapper[4832]: I0312 15:12:15.953196 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" event={"ID":"4787eb10-18fc-4da2-98ce-246687619641","Type":"ContainerStarted","Data":"79d5b35656326e4892fac27992e277384c7a4d1a87438a19f7ee8980162093ef"} Mar 12 15:12:15 crc kubenswrapper[4832]: I0312 15:12:15.984496 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" podStartSLOduration=2.537001388 podStartE2EDuration="20.984467782s" podCreationTimestamp="2026-03-12 15:11:55 +0000 UTC" firstStartedPulling="2026-03-12 15:11:56.689973116 +0000 UTC m=+1475.333987342" lastFinishedPulling="2026-03-12 15:12:15.13743947 +0000 UTC m=+1493.781453736" observedRunningTime="2026-03-12 15:12:15.976085452 +0000 UTC m=+1494.620099678" watchObservedRunningTime="2026-03-12 15:12:15.984467782 +0000 UTC m=+1494.628482008" Mar 12 15:12:48 crc kubenswrapper[4832]: I0312 15:12:48.212677 4832 scope.go:117] "RemoveContainer" containerID="02efc6bf364a734269c3047c758f10f7d5f3200ea8dc50d3f30efb39a906e879" Mar 12 15:12:48 crc kubenswrapper[4832]: I0312 15:12:48.268961 4832 scope.go:117] "RemoveContainer" 
containerID="a76b4353436bcc787f20f3a094c4f8c6bb17ae0f808781d031c2a091bf84b611" Mar 12 15:12:48 crc kubenswrapper[4832]: I0312 15:12:48.298994 4832 scope.go:117] "RemoveContainer" containerID="aabb2defc601f3d3797ce0b64b067866a91e85c962c904780f3178a960455189" Mar 12 15:12:48 crc kubenswrapper[4832]: I0312 15:12:48.343332 4832 scope.go:117] "RemoveContainer" containerID="b82a8c4b0b054a192a4cf5b804079aad3e000cffc5cd3cc5114fede40c2fa717" Mar 12 15:12:48 crc kubenswrapper[4832]: I0312 15:12:48.397545 4832 scope.go:117] "RemoveContainer" containerID="3e61315a6e94c39e4920a73312884690e4d5ad25bf55d03d3dd568e73238bcae" Mar 12 15:12:48 crc kubenswrapper[4832]: I0312 15:12:48.447657 4832 scope.go:117] "RemoveContainer" containerID="90b05e586bfcc3d7d46a9b7c6718bf3c89293b606da609d6fe24a423a89b17ba" Mar 12 15:12:48 crc kubenswrapper[4832]: I0312 15:12:48.492981 4832 scope.go:117] "RemoveContainer" containerID="623e5e290fe496c1e6164230ffeb2082353bd4876b6ba28a04c0485ad1005ed5" Mar 12 15:12:56 crc kubenswrapper[4832]: I0312 15:12:56.314475 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:12:56 crc kubenswrapper[4832]: I0312 15:12:56.314995 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:13:26 crc kubenswrapper[4832]: I0312 15:13:26.314715 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:13:26 crc kubenswrapper[4832]: I0312 15:13:26.315454 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:13:36 crc kubenswrapper[4832]: I0312 15:13:36.789201 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jndvf"] Mar 12 15:13:36 crc kubenswrapper[4832]: E0312 15:13:36.790171 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e" containerName="oc" Mar 12 15:13:36 crc kubenswrapper[4832]: I0312 15:13:36.790186 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e" containerName="oc" Mar 12 15:13:36 crc kubenswrapper[4832]: I0312 15:13:36.790424 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e" containerName="oc" Mar 12 15:13:36 crc kubenswrapper[4832]: I0312 15:13:36.792198 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jndvf" Mar 12 15:13:36 crc kubenswrapper[4832]: I0312 15:13:36.793973 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77hhv\" (UniqueName: \"kubernetes.io/projected/5b25c6db-d1f7-443f-81af-5133bbba7c85-kube-api-access-77hhv\") pod \"certified-operators-jndvf\" (UID: \"5b25c6db-d1f7-443f-81af-5133bbba7c85\") " pod="openshift-marketplace/certified-operators-jndvf" Mar 12 15:13:36 crc kubenswrapper[4832]: I0312 15:13:36.794157 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b25c6db-d1f7-443f-81af-5133bbba7c85-utilities\") pod \"certified-operators-jndvf\" (UID: \"5b25c6db-d1f7-443f-81af-5133bbba7c85\") " pod="openshift-marketplace/certified-operators-jndvf" Mar 12 15:13:36 crc kubenswrapper[4832]: I0312 15:13:36.794275 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b25c6db-d1f7-443f-81af-5133bbba7c85-catalog-content\") pod \"certified-operators-jndvf\" (UID: \"5b25c6db-d1f7-443f-81af-5133bbba7c85\") " pod="openshift-marketplace/certified-operators-jndvf" Mar 12 15:13:36 crc kubenswrapper[4832]: I0312 15:13:36.806691 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jndvf"] Mar 12 15:13:36 crc kubenswrapper[4832]: I0312 15:13:36.896014 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b25c6db-d1f7-443f-81af-5133bbba7c85-utilities\") pod \"certified-operators-jndvf\" (UID: \"5b25c6db-d1f7-443f-81af-5133bbba7c85\") " pod="openshift-marketplace/certified-operators-jndvf" Mar 12 15:13:36 crc kubenswrapper[4832]: I0312 15:13:36.896141 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b25c6db-d1f7-443f-81af-5133bbba7c85-catalog-content\") pod \"certified-operators-jndvf\" (UID: \"5b25c6db-d1f7-443f-81af-5133bbba7c85\") " pod="openshift-marketplace/certified-operators-jndvf" Mar 12 15:13:36 crc kubenswrapper[4832]: I0312 15:13:36.896213 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77hhv\" (UniqueName: \"kubernetes.io/projected/5b25c6db-d1f7-443f-81af-5133bbba7c85-kube-api-access-77hhv\") pod \"certified-operators-jndvf\" (UID: \"5b25c6db-d1f7-443f-81af-5133bbba7c85\") " pod="openshift-marketplace/certified-operators-jndvf" Mar 12 15:13:36 crc kubenswrapper[4832]: I0312 15:13:36.896415 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b25c6db-d1f7-443f-81af-5133bbba7c85-utilities\") pod \"certified-operators-jndvf\" (UID: \"5b25c6db-d1f7-443f-81af-5133bbba7c85\") " pod="openshift-marketplace/certified-operators-jndvf" Mar 12 15:13:36 crc kubenswrapper[4832]: I0312 15:13:36.896610 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b25c6db-d1f7-443f-81af-5133bbba7c85-catalog-content\") pod \"certified-operators-jndvf\" (UID: \"5b25c6db-d1f7-443f-81af-5133bbba7c85\") " pod="openshift-marketplace/certified-operators-jndvf" Mar 12 15:13:36 crc kubenswrapper[4832]: I0312 15:13:36.923725 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77hhv\" (UniqueName: \"kubernetes.io/projected/5b25c6db-d1f7-443f-81af-5133bbba7c85-kube-api-access-77hhv\") pod \"certified-operators-jndvf\" (UID: \"5b25c6db-d1f7-443f-81af-5133bbba7c85\") " pod="openshift-marketplace/certified-operators-jndvf" Mar 12 15:13:37 crc kubenswrapper[4832]: I0312 15:13:37.120686 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jndvf" Mar 12 15:13:37 crc kubenswrapper[4832]: I0312 15:13:37.618236 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jndvf"] Mar 12 15:13:37 crc kubenswrapper[4832]: I0312 15:13:37.930361 4832 generic.go:334] "Generic (PLEG): container finished" podID="5b25c6db-d1f7-443f-81af-5133bbba7c85" containerID="da3c60454f840efe29b1cd3ea5d6656e28dc85b9e001c13b96a5a76ac13e8781" exitCode=0 Mar 12 15:13:37 crc kubenswrapper[4832]: I0312 15:13:37.930406 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jndvf" event={"ID":"5b25c6db-d1f7-443f-81af-5133bbba7c85","Type":"ContainerDied","Data":"da3c60454f840efe29b1cd3ea5d6656e28dc85b9e001c13b96a5a76ac13e8781"} Mar 12 15:13:37 crc kubenswrapper[4832]: I0312 15:13:37.930654 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jndvf" event={"ID":"5b25c6db-d1f7-443f-81af-5133bbba7c85","Type":"ContainerStarted","Data":"0f5503d8736bbd881cd332b8e2e1335fd6e1970e6257b6355a0666d0df532f05"} Mar 12 15:13:39 crc kubenswrapper[4832]: I0312 15:13:39.957552 4832 generic.go:334] "Generic (PLEG): container finished" podID="5b25c6db-d1f7-443f-81af-5133bbba7c85" containerID="5a0d16e17c54bb8927f83ea9c4db4b964aa02e7f1ed4592a7e78fee63e83971f" exitCode=0 Mar 12 15:13:39 crc kubenswrapper[4832]: I0312 15:13:39.957694 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jndvf" event={"ID":"5b25c6db-d1f7-443f-81af-5133bbba7c85","Type":"ContainerDied","Data":"5a0d16e17c54bb8927f83ea9c4db4b964aa02e7f1ed4592a7e78fee63e83971f"} Mar 12 15:13:40 crc kubenswrapper[4832]: I0312 15:13:40.972276 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jndvf" 
event={"ID":"5b25c6db-d1f7-443f-81af-5133bbba7c85","Type":"ContainerStarted","Data":"b0a5343f18b809b1ce30f6e2bf0839022fb76f6b94a5d28e083df62629ebe097"} Mar 12 15:13:41 crc kubenswrapper[4832]: I0312 15:13:41.005809 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jndvf" podStartSLOduration=2.572191723 podStartE2EDuration="5.005791872s" podCreationTimestamp="2026-03-12 15:13:36 +0000 UTC" firstStartedPulling="2026-03-12 15:13:37.932203467 +0000 UTC m=+1576.576217693" lastFinishedPulling="2026-03-12 15:13:40.365803616 +0000 UTC m=+1579.009817842" observedRunningTime="2026-03-12 15:13:41.00466706 +0000 UTC m=+1579.648681316" watchObservedRunningTime="2026-03-12 15:13:41.005791872 +0000 UTC m=+1579.649806108" Mar 12 15:13:47 crc kubenswrapper[4832]: I0312 15:13:47.121411 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jndvf" Mar 12 15:13:47 crc kubenswrapper[4832]: I0312 15:13:47.122045 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jndvf" Mar 12 15:13:47 crc kubenswrapper[4832]: I0312 15:13:47.183424 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jndvf" Mar 12 15:13:48 crc kubenswrapper[4832]: I0312 15:13:48.122989 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jndvf" Mar 12 15:13:48 crc kubenswrapper[4832]: I0312 15:13:48.194006 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jndvf"] Mar 12 15:13:48 crc kubenswrapper[4832]: I0312 15:13:48.635302 4832 scope.go:117] "RemoveContainer" containerID="ecce77dfc2c2cf511b6ae3e4dbd8a0809b952ff46a2082dc4deebb736927564a" Mar 12 15:13:48 crc kubenswrapper[4832]: I0312 15:13:48.671461 4832 scope.go:117] "RemoveContainer" 
containerID="5eefbfa7eb669d0a4c862a0ab44eb862143339915cf3bd65f0fcdebf03ea9b81" Mar 12 15:13:50 crc kubenswrapper[4832]: I0312 15:13:50.087672 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jndvf" podUID="5b25c6db-d1f7-443f-81af-5133bbba7c85" containerName="registry-server" containerID="cri-o://b0a5343f18b809b1ce30f6e2bf0839022fb76f6b94a5d28e083df62629ebe097" gracePeriod=2 Mar 12 15:13:50 crc kubenswrapper[4832]: I0312 15:13:50.588597 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jndvf" Mar 12 15:13:50 crc kubenswrapper[4832]: I0312 15:13:50.773299 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77hhv\" (UniqueName: \"kubernetes.io/projected/5b25c6db-d1f7-443f-81af-5133bbba7c85-kube-api-access-77hhv\") pod \"5b25c6db-d1f7-443f-81af-5133bbba7c85\" (UID: \"5b25c6db-d1f7-443f-81af-5133bbba7c85\") " Mar 12 15:13:50 crc kubenswrapper[4832]: I0312 15:13:50.773399 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b25c6db-d1f7-443f-81af-5133bbba7c85-catalog-content\") pod \"5b25c6db-d1f7-443f-81af-5133bbba7c85\" (UID: \"5b25c6db-d1f7-443f-81af-5133bbba7c85\") " Mar 12 15:13:50 crc kubenswrapper[4832]: I0312 15:13:50.773450 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b25c6db-d1f7-443f-81af-5133bbba7c85-utilities\") pod \"5b25c6db-d1f7-443f-81af-5133bbba7c85\" (UID: \"5b25c6db-d1f7-443f-81af-5133bbba7c85\") " Mar 12 15:13:50 crc kubenswrapper[4832]: I0312 15:13:50.776636 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b25c6db-d1f7-443f-81af-5133bbba7c85-utilities" (OuterVolumeSpecName: "utilities") pod 
"5b25c6db-d1f7-443f-81af-5133bbba7c85" (UID: "5b25c6db-d1f7-443f-81af-5133bbba7c85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:13:50 crc kubenswrapper[4832]: I0312 15:13:50.785855 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b25c6db-d1f7-443f-81af-5133bbba7c85-kube-api-access-77hhv" (OuterVolumeSpecName: "kube-api-access-77hhv") pod "5b25c6db-d1f7-443f-81af-5133bbba7c85" (UID: "5b25c6db-d1f7-443f-81af-5133bbba7c85"). InnerVolumeSpecName "kube-api-access-77hhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:13:50 crc kubenswrapper[4832]: I0312 15:13:50.862240 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b25c6db-d1f7-443f-81af-5133bbba7c85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b25c6db-d1f7-443f-81af-5133bbba7c85" (UID: "5b25c6db-d1f7-443f-81af-5133bbba7c85"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:13:50 crc kubenswrapper[4832]: I0312 15:13:50.876611 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77hhv\" (UniqueName: \"kubernetes.io/projected/5b25c6db-d1f7-443f-81af-5133bbba7c85-kube-api-access-77hhv\") on node \"crc\" DevicePath \"\"" Mar 12 15:13:50 crc kubenswrapper[4832]: I0312 15:13:50.876680 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b25c6db-d1f7-443f-81af-5133bbba7c85-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:13:50 crc kubenswrapper[4832]: I0312 15:13:50.876708 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b25c6db-d1f7-443f-81af-5133bbba7c85-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:13:51 crc kubenswrapper[4832]: I0312 15:13:51.109287 4832 generic.go:334] "Generic (PLEG): container finished" podID="5b25c6db-d1f7-443f-81af-5133bbba7c85" containerID="b0a5343f18b809b1ce30f6e2bf0839022fb76f6b94a5d28e083df62629ebe097" exitCode=0 Mar 12 15:13:51 crc kubenswrapper[4832]: I0312 15:13:51.109376 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jndvf" event={"ID":"5b25c6db-d1f7-443f-81af-5133bbba7c85","Type":"ContainerDied","Data":"b0a5343f18b809b1ce30f6e2bf0839022fb76f6b94a5d28e083df62629ebe097"} Mar 12 15:13:51 crc kubenswrapper[4832]: I0312 15:13:51.109622 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jndvf" event={"ID":"5b25c6db-d1f7-443f-81af-5133bbba7c85","Type":"ContainerDied","Data":"0f5503d8736bbd881cd332b8e2e1335fd6e1970e6257b6355a0666d0df532f05"} Mar 12 15:13:51 crc kubenswrapper[4832]: I0312 15:13:51.109441 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jndvf" Mar 12 15:13:51 crc kubenswrapper[4832]: I0312 15:13:51.109657 4832 scope.go:117] "RemoveContainer" containerID="b0a5343f18b809b1ce30f6e2bf0839022fb76f6b94a5d28e083df62629ebe097" Mar 12 15:13:51 crc kubenswrapper[4832]: I0312 15:13:51.161929 4832 scope.go:117] "RemoveContainer" containerID="5a0d16e17c54bb8927f83ea9c4db4b964aa02e7f1ed4592a7e78fee63e83971f" Mar 12 15:13:51 crc kubenswrapper[4832]: I0312 15:13:51.164675 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jndvf"] Mar 12 15:13:51 crc kubenswrapper[4832]: I0312 15:13:51.181000 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jndvf"] Mar 12 15:13:51 crc kubenswrapper[4832]: I0312 15:13:51.201609 4832 scope.go:117] "RemoveContainer" containerID="da3c60454f840efe29b1cd3ea5d6656e28dc85b9e001c13b96a5a76ac13e8781" Mar 12 15:13:51 crc kubenswrapper[4832]: I0312 15:13:51.243148 4832 scope.go:117] "RemoveContainer" containerID="b0a5343f18b809b1ce30f6e2bf0839022fb76f6b94a5d28e083df62629ebe097" Mar 12 15:13:51 crc kubenswrapper[4832]: E0312 15:13:51.243781 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0a5343f18b809b1ce30f6e2bf0839022fb76f6b94a5d28e083df62629ebe097\": container with ID starting with b0a5343f18b809b1ce30f6e2bf0839022fb76f6b94a5d28e083df62629ebe097 not found: ID does not exist" containerID="b0a5343f18b809b1ce30f6e2bf0839022fb76f6b94a5d28e083df62629ebe097" Mar 12 15:13:51 crc kubenswrapper[4832]: I0312 15:13:51.243837 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0a5343f18b809b1ce30f6e2bf0839022fb76f6b94a5d28e083df62629ebe097"} err="failed to get container status \"b0a5343f18b809b1ce30f6e2bf0839022fb76f6b94a5d28e083df62629ebe097\": rpc error: code = NotFound desc = could not find 
container \"b0a5343f18b809b1ce30f6e2bf0839022fb76f6b94a5d28e083df62629ebe097\": container with ID starting with b0a5343f18b809b1ce30f6e2bf0839022fb76f6b94a5d28e083df62629ebe097 not found: ID does not exist" Mar 12 15:13:51 crc kubenswrapper[4832]: I0312 15:13:51.243869 4832 scope.go:117] "RemoveContainer" containerID="5a0d16e17c54bb8927f83ea9c4db4b964aa02e7f1ed4592a7e78fee63e83971f" Mar 12 15:13:51 crc kubenswrapper[4832]: E0312 15:13:51.244477 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a0d16e17c54bb8927f83ea9c4db4b964aa02e7f1ed4592a7e78fee63e83971f\": container with ID starting with 5a0d16e17c54bb8927f83ea9c4db4b964aa02e7f1ed4592a7e78fee63e83971f not found: ID does not exist" containerID="5a0d16e17c54bb8927f83ea9c4db4b964aa02e7f1ed4592a7e78fee63e83971f" Mar 12 15:13:51 crc kubenswrapper[4832]: I0312 15:13:51.244538 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a0d16e17c54bb8927f83ea9c4db4b964aa02e7f1ed4592a7e78fee63e83971f"} err="failed to get container status \"5a0d16e17c54bb8927f83ea9c4db4b964aa02e7f1ed4592a7e78fee63e83971f\": rpc error: code = NotFound desc = could not find container \"5a0d16e17c54bb8927f83ea9c4db4b964aa02e7f1ed4592a7e78fee63e83971f\": container with ID starting with 5a0d16e17c54bb8927f83ea9c4db4b964aa02e7f1ed4592a7e78fee63e83971f not found: ID does not exist" Mar 12 15:13:51 crc kubenswrapper[4832]: I0312 15:13:51.244573 4832 scope.go:117] "RemoveContainer" containerID="da3c60454f840efe29b1cd3ea5d6656e28dc85b9e001c13b96a5a76ac13e8781" Mar 12 15:13:51 crc kubenswrapper[4832]: E0312 15:13:51.244927 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da3c60454f840efe29b1cd3ea5d6656e28dc85b9e001c13b96a5a76ac13e8781\": container with ID starting with da3c60454f840efe29b1cd3ea5d6656e28dc85b9e001c13b96a5a76ac13e8781 not found: ID does 
not exist" containerID="da3c60454f840efe29b1cd3ea5d6656e28dc85b9e001c13b96a5a76ac13e8781" Mar 12 15:13:51 crc kubenswrapper[4832]: I0312 15:13:51.244965 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da3c60454f840efe29b1cd3ea5d6656e28dc85b9e001c13b96a5a76ac13e8781"} err="failed to get container status \"da3c60454f840efe29b1cd3ea5d6656e28dc85b9e001c13b96a5a76ac13e8781\": rpc error: code = NotFound desc = could not find container \"da3c60454f840efe29b1cd3ea5d6656e28dc85b9e001c13b96a5a76ac13e8781\": container with ID starting with da3c60454f840efe29b1cd3ea5d6656e28dc85b9e001c13b96a5a76ac13e8781 not found: ID does not exist" Mar 12 15:13:52 crc kubenswrapper[4832]: I0312 15:13:52.629902 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b25c6db-d1f7-443f-81af-5133bbba7c85" path="/var/lib/kubelet/pods/5b25c6db-d1f7-443f-81af-5133bbba7c85/volumes" Mar 12 15:13:56 crc kubenswrapper[4832]: I0312 15:13:56.314594 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:13:56 crc kubenswrapper[4832]: I0312 15:13:56.315342 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:13:56 crc kubenswrapper[4832]: I0312 15:13:56.315418 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 15:13:56 crc kubenswrapper[4832]: I0312 15:13:56.316429 4832 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3"} pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 15:13:56 crc kubenswrapper[4832]: I0312 15:13:56.316570 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" containerID="cri-o://f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" gracePeriod=600 Mar 12 15:13:56 crc kubenswrapper[4832]: E0312 15:13:56.467786 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:13:57 crc kubenswrapper[4832]: I0312 15:13:57.189668 4832 generic.go:334] "Generic (PLEG): container finished" podID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" exitCode=0 Mar 12 15:13:57 crc kubenswrapper[4832]: I0312 15:13:57.189717 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerDied","Data":"f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3"} Mar 12 15:13:57 crc kubenswrapper[4832]: I0312 15:13:57.190056 4832 scope.go:117] "RemoveContainer" containerID="ce9bdbc63a202a02b8c581188fe9e262cc502ac13b76d8ec31fd67aea15a9bba" Mar 12 15:13:57 crc 
kubenswrapper[4832]: I0312 15:13:57.190720 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:13:57 crc kubenswrapper[4832]: E0312 15:13:57.191015 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:14:00 crc kubenswrapper[4832]: I0312 15:14:00.163797 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555474-q5txp"] Mar 12 15:14:00 crc kubenswrapper[4832]: E0312 15:14:00.164893 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b25c6db-d1f7-443f-81af-5133bbba7c85" containerName="extract-content" Mar 12 15:14:00 crc kubenswrapper[4832]: I0312 15:14:00.164909 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b25c6db-d1f7-443f-81af-5133bbba7c85" containerName="extract-content" Mar 12 15:14:00 crc kubenswrapper[4832]: E0312 15:14:00.164940 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b25c6db-d1f7-443f-81af-5133bbba7c85" containerName="registry-server" Mar 12 15:14:00 crc kubenswrapper[4832]: I0312 15:14:00.164947 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b25c6db-d1f7-443f-81af-5133bbba7c85" containerName="registry-server" Mar 12 15:14:00 crc kubenswrapper[4832]: E0312 15:14:00.164966 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b25c6db-d1f7-443f-81af-5133bbba7c85" containerName="extract-utilities" Mar 12 15:14:00 crc kubenswrapper[4832]: I0312 15:14:00.164975 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b25c6db-d1f7-443f-81af-5133bbba7c85" 
containerName="extract-utilities" Mar 12 15:14:00 crc kubenswrapper[4832]: I0312 15:14:00.165208 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b25c6db-d1f7-443f-81af-5133bbba7c85" containerName="registry-server" Mar 12 15:14:00 crc kubenswrapper[4832]: I0312 15:14:00.166008 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555474-q5txp" Mar 12 15:14:00 crc kubenswrapper[4832]: I0312 15:14:00.168422 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:14:00 crc kubenswrapper[4832]: I0312 15:14:00.168873 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:14:00 crc kubenswrapper[4832]: I0312 15:14:00.169224 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:14:00 crc kubenswrapper[4832]: I0312 15:14:00.177746 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555474-q5txp"] Mar 12 15:14:00 crc kubenswrapper[4832]: I0312 15:14:00.281714 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vckz9\" (UniqueName: \"kubernetes.io/projected/1d9129c0-c103-4451-aa21-684915e37eeb-kube-api-access-vckz9\") pod \"auto-csr-approver-29555474-q5txp\" (UID: \"1d9129c0-c103-4451-aa21-684915e37eeb\") " pod="openshift-infra/auto-csr-approver-29555474-q5txp" Mar 12 15:14:00 crc kubenswrapper[4832]: I0312 15:14:00.384200 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vckz9\" (UniqueName: \"kubernetes.io/projected/1d9129c0-c103-4451-aa21-684915e37eeb-kube-api-access-vckz9\") pod \"auto-csr-approver-29555474-q5txp\" (UID: \"1d9129c0-c103-4451-aa21-684915e37eeb\") " pod="openshift-infra/auto-csr-approver-29555474-q5txp" Mar 12 
15:14:00 crc kubenswrapper[4832]: I0312 15:14:00.403399 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vckz9\" (UniqueName: \"kubernetes.io/projected/1d9129c0-c103-4451-aa21-684915e37eeb-kube-api-access-vckz9\") pod \"auto-csr-approver-29555474-q5txp\" (UID: \"1d9129c0-c103-4451-aa21-684915e37eeb\") " pod="openshift-infra/auto-csr-approver-29555474-q5txp" Mar 12 15:14:00 crc kubenswrapper[4832]: I0312 15:14:00.495771 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555474-q5txp" Mar 12 15:14:00 crc kubenswrapper[4832]: I0312 15:14:00.936586 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555474-q5txp"] Mar 12 15:14:01 crc kubenswrapper[4832]: I0312 15:14:01.232711 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555474-q5txp" event={"ID":"1d9129c0-c103-4451-aa21-684915e37eeb","Type":"ContainerStarted","Data":"6a37e93fb8ed0be65175ee7d9194904864e8e652cafc930435c868f08621e9b4"} Mar 12 15:14:03 crc kubenswrapper[4832]: I0312 15:14:03.279203 4832 generic.go:334] "Generic (PLEG): container finished" podID="1d9129c0-c103-4451-aa21-684915e37eeb" containerID="07c7c634bdd9c7a9b17e685f6515bb01a0f0cc48d7e3922b2876195fb76617d4" exitCode=0 Mar 12 15:14:03 crc kubenswrapper[4832]: I0312 15:14:03.279517 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555474-q5txp" event={"ID":"1d9129c0-c103-4451-aa21-684915e37eeb","Type":"ContainerDied","Data":"07c7c634bdd9c7a9b17e685f6515bb01a0f0cc48d7e3922b2876195fb76617d4"} Mar 12 15:14:04 crc kubenswrapper[4832]: I0312 15:14:04.619383 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555474-q5txp" Mar 12 15:14:04 crc kubenswrapper[4832]: I0312 15:14:04.895380 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vckz9\" (UniqueName: \"kubernetes.io/projected/1d9129c0-c103-4451-aa21-684915e37eeb-kube-api-access-vckz9\") pod \"1d9129c0-c103-4451-aa21-684915e37eeb\" (UID: \"1d9129c0-c103-4451-aa21-684915e37eeb\") " Mar 12 15:14:04 crc kubenswrapper[4832]: I0312 15:14:04.901488 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d9129c0-c103-4451-aa21-684915e37eeb-kube-api-access-vckz9" (OuterVolumeSpecName: "kube-api-access-vckz9") pod "1d9129c0-c103-4451-aa21-684915e37eeb" (UID: "1d9129c0-c103-4451-aa21-684915e37eeb"). InnerVolumeSpecName "kube-api-access-vckz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:14:04 crc kubenswrapper[4832]: I0312 15:14:04.998743 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vckz9\" (UniqueName: \"kubernetes.io/projected/1d9129c0-c103-4451-aa21-684915e37eeb-kube-api-access-vckz9\") on node \"crc\" DevicePath \"\"" Mar 12 15:14:05 crc kubenswrapper[4832]: I0312 15:14:05.303888 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555474-q5txp" event={"ID":"1d9129c0-c103-4451-aa21-684915e37eeb","Type":"ContainerDied","Data":"6a37e93fb8ed0be65175ee7d9194904864e8e652cafc930435c868f08621e9b4"} Mar 12 15:14:05 crc kubenswrapper[4832]: I0312 15:14:05.303939 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a37e93fb8ed0be65175ee7d9194904864e8e652cafc930435c868f08621e9b4" Mar 12 15:14:05 crc kubenswrapper[4832]: I0312 15:14:05.303996 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555474-q5txp" Mar 12 15:14:05 crc kubenswrapper[4832]: I0312 15:14:05.687629 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555468-dnh7b"] Mar 12 15:14:05 crc kubenswrapper[4832]: I0312 15:14:05.695893 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555468-dnh7b"] Mar 12 15:14:06 crc kubenswrapper[4832]: I0312 15:14:06.634370 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa03d1b7-d97e-4806-86c9-8f77ce37f5bf" path="/var/lib/kubelet/pods/fa03d1b7-d97e-4806-86c9-8f77ce37f5bf/volumes" Mar 12 15:14:08 crc kubenswrapper[4832]: I0312 15:14:08.620606 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:14:08 crc kubenswrapper[4832]: E0312 15:14:08.621600 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:14:22 crc kubenswrapper[4832]: I0312 15:14:22.626596 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:14:22 crc kubenswrapper[4832]: E0312 15:14:22.627213 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" 
podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:14:34 crc kubenswrapper[4832]: I0312 15:14:34.619468 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:14:34 crc kubenswrapper[4832]: E0312 15:14:34.620257 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:14:48 crc kubenswrapper[4832]: I0312 15:14:48.620533 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:14:48 crc kubenswrapper[4832]: E0312 15:14:48.621437 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:14:48 crc kubenswrapper[4832]: I0312 15:14:48.742574 4832 scope.go:117] "RemoveContainer" containerID="018dc5e56868d28a9fc5e0b53ebcad64eaf41e9c829b322f8f441040c6748a4f" Mar 12 15:15:00 crc kubenswrapper[4832]: I0312 15:15:00.159359 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555475-jkn99"] Mar 12 15:15:00 crc kubenswrapper[4832]: E0312 15:15:00.161336 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d9129c0-c103-4451-aa21-684915e37eeb" containerName="oc" Mar 12 15:15:00 crc kubenswrapper[4832]: I0312 
15:15:00.161365 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d9129c0-c103-4451-aa21-684915e37eeb" containerName="oc" Mar 12 15:15:00 crc kubenswrapper[4832]: I0312 15:15:00.161664 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d9129c0-c103-4451-aa21-684915e37eeb" containerName="oc" Mar 12 15:15:00 crc kubenswrapper[4832]: I0312 15:15:00.162719 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-jkn99" Mar 12 15:15:00 crc kubenswrapper[4832]: I0312 15:15:00.167561 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 15:15:00 crc kubenswrapper[4832]: I0312 15:15:00.168107 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 15:15:00 crc kubenswrapper[4832]: I0312 15:15:00.178855 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555475-jkn99"] Mar 12 15:15:00 crc kubenswrapper[4832]: I0312 15:15:00.273607 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c702449-f836-49ad-8a42-531ec927c18d-secret-volume\") pod \"collect-profiles-29555475-jkn99\" (UID: \"3c702449-f836-49ad-8a42-531ec927c18d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-jkn99" Mar 12 15:15:00 crc kubenswrapper[4832]: I0312 15:15:00.273789 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvrdj\" (UniqueName: \"kubernetes.io/projected/3c702449-f836-49ad-8a42-531ec927c18d-kube-api-access-kvrdj\") pod \"collect-profiles-29555475-jkn99\" (UID: \"3c702449-f836-49ad-8a42-531ec927c18d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-jkn99" Mar 12 15:15:00 crc kubenswrapper[4832]: I0312 15:15:00.273828 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c702449-f836-49ad-8a42-531ec927c18d-config-volume\") pod \"collect-profiles-29555475-jkn99\" (UID: \"3c702449-f836-49ad-8a42-531ec927c18d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-jkn99" Mar 12 15:15:00 crc kubenswrapper[4832]: I0312 15:15:00.375461 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c702449-f836-49ad-8a42-531ec927c18d-config-volume\") pod \"collect-profiles-29555475-jkn99\" (UID: \"3c702449-f836-49ad-8a42-531ec927c18d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-jkn99" Mar 12 15:15:00 crc kubenswrapper[4832]: I0312 15:15:00.376404 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c702449-f836-49ad-8a42-531ec927c18d-secret-volume\") pod \"collect-profiles-29555475-jkn99\" (UID: \"3c702449-f836-49ad-8a42-531ec927c18d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-jkn99" Mar 12 15:15:00 crc kubenswrapper[4832]: I0312 15:15:00.376651 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c702449-f836-49ad-8a42-531ec927c18d-config-volume\") pod \"collect-profiles-29555475-jkn99\" (UID: \"3c702449-f836-49ad-8a42-531ec927c18d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-jkn99" Mar 12 15:15:00 crc kubenswrapper[4832]: I0312 15:15:00.377205 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvrdj\" (UniqueName: 
\"kubernetes.io/projected/3c702449-f836-49ad-8a42-531ec927c18d-kube-api-access-kvrdj\") pod \"collect-profiles-29555475-jkn99\" (UID: \"3c702449-f836-49ad-8a42-531ec927c18d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-jkn99" Mar 12 15:15:00 crc kubenswrapper[4832]: I0312 15:15:00.384869 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c702449-f836-49ad-8a42-531ec927c18d-secret-volume\") pod \"collect-profiles-29555475-jkn99\" (UID: \"3c702449-f836-49ad-8a42-531ec927c18d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-jkn99" Mar 12 15:15:00 crc kubenswrapper[4832]: I0312 15:15:00.793201 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvrdj\" (UniqueName: \"kubernetes.io/projected/3c702449-f836-49ad-8a42-531ec927c18d-kube-api-access-kvrdj\") pod \"collect-profiles-29555475-jkn99\" (UID: \"3c702449-f836-49ad-8a42-531ec927c18d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-jkn99" Mar 12 15:15:00 crc kubenswrapper[4832]: I0312 15:15:00.802005 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-jkn99" Mar 12 15:15:01 crc kubenswrapper[4832]: I0312 15:15:01.264212 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555475-jkn99"] Mar 12 15:15:01 crc kubenswrapper[4832]: I0312 15:15:01.927006 4832 generic.go:334] "Generic (PLEG): container finished" podID="3c702449-f836-49ad-8a42-531ec927c18d" containerID="5c0d08409d83e8bfec12c8f10c00a4360e4326470590751ff45e07a0c3e8d5d2" exitCode=0 Mar 12 15:15:01 crc kubenswrapper[4832]: I0312 15:15:01.927824 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-jkn99" event={"ID":"3c702449-f836-49ad-8a42-531ec927c18d","Type":"ContainerDied","Data":"5c0d08409d83e8bfec12c8f10c00a4360e4326470590751ff45e07a0c3e8d5d2"} Mar 12 15:15:01 crc kubenswrapper[4832]: I0312 15:15:01.927856 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-jkn99" event={"ID":"3c702449-f836-49ad-8a42-531ec927c18d","Type":"ContainerStarted","Data":"513f54c872c333d54320f24e6d7a7271d108caecec2d8c521a8e1adf90026d45"} Mar 12 15:15:02 crc kubenswrapper[4832]: I0312 15:15:02.626486 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:15:02 crc kubenswrapper[4832]: E0312 15:15:02.627027 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:15:03 crc kubenswrapper[4832]: I0312 15:15:03.301004 4832 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-jkn99" Mar 12 15:15:03 crc kubenswrapper[4832]: I0312 15:15:03.452587 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvrdj\" (UniqueName: \"kubernetes.io/projected/3c702449-f836-49ad-8a42-531ec927c18d-kube-api-access-kvrdj\") pod \"3c702449-f836-49ad-8a42-531ec927c18d\" (UID: \"3c702449-f836-49ad-8a42-531ec927c18d\") " Mar 12 15:15:03 crc kubenswrapper[4832]: I0312 15:15:03.452881 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c702449-f836-49ad-8a42-531ec927c18d-config-volume\") pod \"3c702449-f836-49ad-8a42-531ec927c18d\" (UID: \"3c702449-f836-49ad-8a42-531ec927c18d\") " Mar 12 15:15:03 crc kubenswrapper[4832]: I0312 15:15:03.452947 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c702449-f836-49ad-8a42-531ec927c18d-secret-volume\") pod \"3c702449-f836-49ad-8a42-531ec927c18d\" (UID: \"3c702449-f836-49ad-8a42-531ec927c18d\") " Mar 12 15:15:03 crc kubenswrapper[4832]: I0312 15:15:03.453471 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c702449-f836-49ad-8a42-531ec927c18d-config-volume" (OuterVolumeSpecName: "config-volume") pod "3c702449-f836-49ad-8a42-531ec927c18d" (UID: "3c702449-f836-49ad-8a42-531ec927c18d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:15:03 crc kubenswrapper[4832]: I0312 15:15:03.459963 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c702449-f836-49ad-8a42-531ec927c18d-kube-api-access-kvrdj" (OuterVolumeSpecName: "kube-api-access-kvrdj") pod "3c702449-f836-49ad-8a42-531ec927c18d" (UID: "3c702449-f836-49ad-8a42-531ec927c18d"). 
InnerVolumeSpecName "kube-api-access-kvrdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:15:03 crc kubenswrapper[4832]: I0312 15:15:03.460112 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c702449-f836-49ad-8a42-531ec927c18d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3c702449-f836-49ad-8a42-531ec927c18d" (UID: "3c702449-f836-49ad-8a42-531ec927c18d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:15:03 crc kubenswrapper[4832]: I0312 15:15:03.555712 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c702449-f836-49ad-8a42-531ec927c18d-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 15:15:03 crc kubenswrapper[4832]: I0312 15:15:03.555797 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvrdj\" (UniqueName: \"kubernetes.io/projected/3c702449-f836-49ad-8a42-531ec927c18d-kube-api-access-kvrdj\") on node \"crc\" DevicePath \"\"" Mar 12 15:15:03 crc kubenswrapper[4832]: I0312 15:15:03.555821 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c702449-f836-49ad-8a42-531ec927c18d-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 15:15:03 crc kubenswrapper[4832]: I0312 15:15:03.957760 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-jkn99" event={"ID":"3c702449-f836-49ad-8a42-531ec927c18d","Type":"ContainerDied","Data":"513f54c872c333d54320f24e6d7a7271d108caecec2d8c521a8e1adf90026d45"} Mar 12 15:15:03 crc kubenswrapper[4832]: I0312 15:15:03.957815 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="513f54c872c333d54320f24e6d7a7271d108caecec2d8c521a8e1adf90026d45" Mar 12 15:15:03 crc kubenswrapper[4832]: I0312 15:15:03.957820 4832 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-jkn99" Mar 12 15:15:09 crc kubenswrapper[4832]: I0312 15:15:09.009368 4832 generic.go:334] "Generic (PLEG): container finished" podID="4787eb10-18fc-4da2-98ce-246687619641" containerID="79d5b35656326e4892fac27992e277384c7a4d1a87438a19f7ee8980162093ef" exitCode=0 Mar 12 15:15:09 crc kubenswrapper[4832]: I0312 15:15:09.009443 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" event={"ID":"4787eb10-18fc-4da2-98ce-246687619641","Type":"ContainerDied","Data":"79d5b35656326e4892fac27992e277384c7a4d1a87438a19f7ee8980162093ef"} Mar 12 15:15:10 crc kubenswrapper[4832]: I0312 15:15:10.778150 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" Mar 12 15:15:10 crc kubenswrapper[4832]: I0312 15:15:10.927918 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjr42\" (UniqueName: \"kubernetes.io/projected/4787eb10-18fc-4da2-98ce-246687619641-kube-api-access-kjr42\") pod \"4787eb10-18fc-4da2-98ce-246687619641\" (UID: \"4787eb10-18fc-4da2-98ce-246687619641\") " Mar 12 15:15:10 crc kubenswrapper[4832]: I0312 15:15:10.928091 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4787eb10-18fc-4da2-98ce-246687619641-inventory\") pod \"4787eb10-18fc-4da2-98ce-246687619641\" (UID: \"4787eb10-18fc-4da2-98ce-246687619641\") " Mar 12 15:15:10 crc kubenswrapper[4832]: I0312 15:15:10.928149 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4787eb10-18fc-4da2-98ce-246687619641-bootstrap-combined-ca-bundle\") pod \"4787eb10-18fc-4da2-98ce-246687619641\" (UID: 
\"4787eb10-18fc-4da2-98ce-246687619641\") " Mar 12 15:15:10 crc kubenswrapper[4832]: I0312 15:15:10.928336 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4787eb10-18fc-4da2-98ce-246687619641-ssh-key-openstack-edpm-ipam\") pod \"4787eb10-18fc-4da2-98ce-246687619641\" (UID: \"4787eb10-18fc-4da2-98ce-246687619641\") " Mar 12 15:15:10 crc kubenswrapper[4832]: I0312 15:15:10.934048 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4787eb10-18fc-4da2-98ce-246687619641-kube-api-access-kjr42" (OuterVolumeSpecName: "kube-api-access-kjr42") pod "4787eb10-18fc-4da2-98ce-246687619641" (UID: "4787eb10-18fc-4da2-98ce-246687619641"). InnerVolumeSpecName "kube-api-access-kjr42". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:15:10 crc kubenswrapper[4832]: I0312 15:15:10.936810 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4787eb10-18fc-4da2-98ce-246687619641-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "4787eb10-18fc-4da2-98ce-246687619641" (UID: "4787eb10-18fc-4da2-98ce-246687619641"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:15:10 crc kubenswrapper[4832]: I0312 15:15:10.954114 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4787eb10-18fc-4da2-98ce-246687619641-inventory" (OuterVolumeSpecName: "inventory") pod "4787eb10-18fc-4da2-98ce-246687619641" (UID: "4787eb10-18fc-4da2-98ce-246687619641"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:15:10 crc kubenswrapper[4832]: I0312 15:15:10.956694 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4787eb10-18fc-4da2-98ce-246687619641-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4787eb10-18fc-4da2-98ce-246687619641" (UID: "4787eb10-18fc-4da2-98ce-246687619641"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.030920 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" event={"ID":"4787eb10-18fc-4da2-98ce-246687619641","Type":"ContainerDied","Data":"247dfd97423ea703751027945993b4babdd44a3327de1db4d919a534f53685e9"} Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.030963 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="247dfd97423ea703751027945993b4babdd44a3327de1db4d919a534f53685e9" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.031023 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.031896 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4787eb10-18fc-4da2-98ce-246687619641-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.031945 4832 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4787eb10-18fc-4da2-98ce-246687619641-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.031971 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4787eb10-18fc-4da2-98ce-246687619641-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.031990 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjr42\" (UniqueName: \"kubernetes.io/projected/4787eb10-18fc-4da2-98ce-246687619641-kube-api-access-kjr42\") on node \"crc\" DevicePath \"\"" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.522345 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh"] Mar 12 15:15:11 crc kubenswrapper[4832]: E0312 15:15:11.522804 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c702449-f836-49ad-8a42-531ec927c18d" containerName="collect-profiles" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.522826 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c702449-f836-49ad-8a42-531ec927c18d" containerName="collect-profiles" Mar 12 15:15:11 crc kubenswrapper[4832]: E0312 15:15:11.522852 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4787eb10-18fc-4da2-98ce-246687619641" 
containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.522862 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4787eb10-18fc-4da2-98ce-246687619641" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.523090 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4787eb10-18fc-4da2-98ce-246687619641" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.523113 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c702449-f836-49ad-8a42-531ec927c18d" containerName="collect-profiles" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.523748 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.526107 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.526491 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.526674 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6npm" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.529041 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.550982 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh"] Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.648411 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aba57da9-d394-41df-a7ea-23344bad0e60-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9czwh\" (UID: \"aba57da9-d394-41df-a7ea-23344bad0e60\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.648496 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdrnd\" (UniqueName: \"kubernetes.io/projected/aba57da9-d394-41df-a7ea-23344bad0e60-kube-api-access-wdrnd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9czwh\" (UID: \"aba57da9-d394-41df-a7ea-23344bad0e60\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.648589 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aba57da9-d394-41df-a7ea-23344bad0e60-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9czwh\" (UID: \"aba57da9-d394-41df-a7ea-23344bad0e60\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.751303 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aba57da9-d394-41df-a7ea-23344bad0e60-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9czwh\" (UID: \"aba57da9-d394-41df-a7ea-23344bad0e60\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.751440 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdrnd\" (UniqueName: \"kubernetes.io/projected/aba57da9-d394-41df-a7ea-23344bad0e60-kube-api-access-wdrnd\") 
pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9czwh\" (UID: \"aba57da9-d394-41df-a7ea-23344bad0e60\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.752756 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aba57da9-d394-41df-a7ea-23344bad0e60-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9czwh\" (UID: \"aba57da9-d394-41df-a7ea-23344bad0e60\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.756764 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aba57da9-d394-41df-a7ea-23344bad0e60-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9czwh\" (UID: \"aba57da9-d394-41df-a7ea-23344bad0e60\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.758186 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aba57da9-d394-41df-a7ea-23344bad0e60-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9czwh\" (UID: \"aba57da9-d394-41df-a7ea-23344bad0e60\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh" Mar 12 15:15:11 crc kubenswrapper[4832]: I0312 15:15:11.785433 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdrnd\" (UniqueName: \"kubernetes.io/projected/aba57da9-d394-41df-a7ea-23344bad0e60-kube-api-access-wdrnd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9czwh\" (UID: \"aba57da9-d394-41df-a7ea-23344bad0e60\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh" Mar 12 15:15:11 crc kubenswrapper[4832]: 
I0312 15:15:11.908883 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh" Mar 12 15:15:12 crc kubenswrapper[4832]: I0312 15:15:12.444439 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh"] Mar 12 15:15:13 crc kubenswrapper[4832]: I0312 15:15:13.055882 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh" event={"ID":"aba57da9-d394-41df-a7ea-23344bad0e60","Type":"ContainerStarted","Data":"3b5bcb03b45b44a273a0f3128c0e76a2fc8a405b3196da2d63009837a922cc91"} Mar 12 15:15:14 crc kubenswrapper[4832]: I0312 15:15:14.065929 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh" event={"ID":"aba57da9-d394-41df-a7ea-23344bad0e60","Type":"ContainerStarted","Data":"adfa445db3583797f1f232f79d7741c2a5672c1d8cefa540a18fb4ac25e2fbf4"} Mar 12 15:15:14 crc kubenswrapper[4832]: I0312 15:15:14.089633 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh" podStartSLOduration=2.6282706940000002 podStartE2EDuration="3.089608519s" podCreationTimestamp="2026-03-12 15:15:11 +0000 UTC" firstStartedPulling="2026-03-12 15:15:12.454997651 +0000 UTC m=+1671.099011877" lastFinishedPulling="2026-03-12 15:15:12.916335486 +0000 UTC m=+1671.560349702" observedRunningTime="2026-03-12 15:15:14.084848404 +0000 UTC m=+1672.728862640" watchObservedRunningTime="2026-03-12 15:15:14.089608519 +0000 UTC m=+1672.733622745" Mar 12 15:15:14 crc kubenswrapper[4832]: I0312 15:15:14.620783 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:15:14 crc kubenswrapper[4832]: E0312 15:15:14.621327 4832 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:15:28 crc kubenswrapper[4832]: I0312 15:15:28.619702 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:15:28 crc kubenswrapper[4832]: E0312 15:15:28.620708 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:15:33 crc kubenswrapper[4832]: I0312 15:15:33.754244 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l89zj"] Mar 12 15:15:33 crc kubenswrapper[4832]: I0312 15:15:33.756680 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l89zj" Mar 12 15:15:33 crc kubenswrapper[4832]: I0312 15:15:33.773765 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l89zj"] Mar 12 15:15:33 crc kubenswrapper[4832]: I0312 15:15:33.904460 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72f732d-0342-4f9f-8e06-7cb5a4065db3-utilities\") pod \"redhat-marketplace-l89zj\" (UID: \"a72f732d-0342-4f9f-8e06-7cb5a4065db3\") " pod="openshift-marketplace/redhat-marketplace-l89zj" Mar 12 15:15:33 crc kubenswrapper[4832]: I0312 15:15:33.904561 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72f732d-0342-4f9f-8e06-7cb5a4065db3-catalog-content\") pod \"redhat-marketplace-l89zj\" (UID: \"a72f732d-0342-4f9f-8e06-7cb5a4065db3\") " pod="openshift-marketplace/redhat-marketplace-l89zj" Mar 12 15:15:33 crc kubenswrapper[4832]: I0312 15:15:33.904700 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlkss\" (UniqueName: \"kubernetes.io/projected/a72f732d-0342-4f9f-8e06-7cb5a4065db3-kube-api-access-jlkss\") pod \"redhat-marketplace-l89zj\" (UID: \"a72f732d-0342-4f9f-8e06-7cb5a4065db3\") " pod="openshift-marketplace/redhat-marketplace-l89zj" Mar 12 15:15:34 crc kubenswrapper[4832]: I0312 15:15:34.007137 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlkss\" (UniqueName: \"kubernetes.io/projected/a72f732d-0342-4f9f-8e06-7cb5a4065db3-kube-api-access-jlkss\") pod \"redhat-marketplace-l89zj\" (UID: \"a72f732d-0342-4f9f-8e06-7cb5a4065db3\") " pod="openshift-marketplace/redhat-marketplace-l89zj" Mar 12 15:15:34 crc kubenswrapper[4832]: I0312 15:15:34.007320 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72f732d-0342-4f9f-8e06-7cb5a4065db3-utilities\") pod \"redhat-marketplace-l89zj\" (UID: \"a72f732d-0342-4f9f-8e06-7cb5a4065db3\") " pod="openshift-marketplace/redhat-marketplace-l89zj" Mar 12 15:15:34 crc kubenswrapper[4832]: I0312 15:15:34.007932 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72f732d-0342-4f9f-8e06-7cb5a4065db3-utilities\") pod \"redhat-marketplace-l89zj\" (UID: \"a72f732d-0342-4f9f-8e06-7cb5a4065db3\") " pod="openshift-marketplace/redhat-marketplace-l89zj" Mar 12 15:15:34 crc kubenswrapper[4832]: I0312 15:15:34.008290 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72f732d-0342-4f9f-8e06-7cb5a4065db3-catalog-content\") pod \"redhat-marketplace-l89zj\" (UID: \"a72f732d-0342-4f9f-8e06-7cb5a4065db3\") " pod="openshift-marketplace/redhat-marketplace-l89zj" Mar 12 15:15:34 crc kubenswrapper[4832]: I0312 15:15:34.008024 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72f732d-0342-4f9f-8e06-7cb5a4065db3-catalog-content\") pod \"redhat-marketplace-l89zj\" (UID: \"a72f732d-0342-4f9f-8e06-7cb5a4065db3\") " pod="openshift-marketplace/redhat-marketplace-l89zj" Mar 12 15:15:34 crc kubenswrapper[4832]: I0312 15:15:34.037986 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlkss\" (UniqueName: \"kubernetes.io/projected/a72f732d-0342-4f9f-8e06-7cb5a4065db3-kube-api-access-jlkss\") pod \"redhat-marketplace-l89zj\" (UID: \"a72f732d-0342-4f9f-8e06-7cb5a4065db3\") " pod="openshift-marketplace/redhat-marketplace-l89zj" Mar 12 15:15:34 crc kubenswrapper[4832]: I0312 15:15:34.073283 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l89zj" Mar 12 15:15:34 crc kubenswrapper[4832]: I0312 15:15:34.574356 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l89zj"] Mar 12 15:15:34 crc kubenswrapper[4832]: W0312 15:15:34.586809 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda72f732d_0342_4f9f_8e06_7cb5a4065db3.slice/crio-82af282108385ca53a9f8a8fb66dec776154a4d6b9b016665370679390fd9949 WatchSource:0}: Error finding container 82af282108385ca53a9f8a8fb66dec776154a4d6b9b016665370679390fd9949: Status 404 returned error can't find the container with id 82af282108385ca53a9f8a8fb66dec776154a4d6b9b016665370679390fd9949 Mar 12 15:15:35 crc kubenswrapper[4832]: I0312 15:15:35.282296 4832 generic.go:334] "Generic (PLEG): container finished" podID="a72f732d-0342-4f9f-8e06-7cb5a4065db3" containerID="f039a27ff714822e2b974f0a5f93971d0d9ef487116f79deaeb0c78ed1d3e716" exitCode=0 Mar 12 15:15:35 crc kubenswrapper[4832]: I0312 15:15:35.282349 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l89zj" event={"ID":"a72f732d-0342-4f9f-8e06-7cb5a4065db3","Type":"ContainerDied","Data":"f039a27ff714822e2b974f0a5f93971d0d9ef487116f79deaeb0c78ed1d3e716"} Mar 12 15:15:35 crc kubenswrapper[4832]: I0312 15:15:35.282727 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l89zj" event={"ID":"a72f732d-0342-4f9f-8e06-7cb5a4065db3","Type":"ContainerStarted","Data":"82af282108385ca53a9f8a8fb66dec776154a4d6b9b016665370679390fd9949"} Mar 12 15:15:40 crc kubenswrapper[4832]: I0312 15:15:40.326872 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l89zj" 
event={"ID":"a72f732d-0342-4f9f-8e06-7cb5a4065db3","Type":"ContainerStarted","Data":"bcd08202086861c0dabd5b3a6c0f2725998c385d78ad8095db41b16b36369a02"} Mar 12 15:15:40 crc kubenswrapper[4832]: E0312 15:15:40.489459 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda72f732d_0342_4f9f_8e06_7cb5a4065db3.slice/crio-bcd08202086861c0dabd5b3a6c0f2725998c385d78ad8095db41b16b36369a02.scope\": RecentStats: unable to find data in memory cache]" Mar 12 15:15:40 crc kubenswrapper[4832]: I0312 15:15:40.620222 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:15:40 crc kubenswrapper[4832]: E0312 15:15:40.620762 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:15:41 crc kubenswrapper[4832]: I0312 15:15:41.337155 4832 generic.go:334] "Generic (PLEG): container finished" podID="a72f732d-0342-4f9f-8e06-7cb5a4065db3" containerID="bcd08202086861c0dabd5b3a6c0f2725998c385d78ad8095db41b16b36369a02" exitCode=0 Mar 12 15:15:41 crc kubenswrapper[4832]: I0312 15:15:41.337212 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l89zj" event={"ID":"a72f732d-0342-4f9f-8e06-7cb5a4065db3","Type":"ContainerDied","Data":"bcd08202086861c0dabd5b3a6c0f2725998c385d78ad8095db41b16b36369a02"} Mar 12 15:15:42 crc kubenswrapper[4832]: I0312 15:15:42.350376 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l89zj" 
event={"ID":"a72f732d-0342-4f9f-8e06-7cb5a4065db3","Type":"ContainerStarted","Data":"e0f52bd63cdf90133f96d78595835128c115cc5641c334dc34c563369b292d22"} Mar 12 15:15:42 crc kubenswrapper[4832]: I0312 15:15:42.373201 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l89zj" podStartSLOduration=2.774687594 podStartE2EDuration="9.373184108s" podCreationTimestamp="2026-03-12 15:15:33 +0000 UTC" firstStartedPulling="2026-03-12 15:15:35.284134349 +0000 UTC m=+1693.928148595" lastFinishedPulling="2026-03-12 15:15:41.882630873 +0000 UTC m=+1700.526645109" observedRunningTime="2026-03-12 15:15:42.370138482 +0000 UTC m=+1701.014152718" watchObservedRunningTime="2026-03-12 15:15:42.373184108 +0000 UTC m=+1701.017198334" Mar 12 15:15:44 crc kubenswrapper[4832]: I0312 15:15:44.074085 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l89zj" Mar 12 15:15:44 crc kubenswrapper[4832]: I0312 15:15:44.074483 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l89zj" Mar 12 15:15:44 crc kubenswrapper[4832]: I0312 15:15:44.137229 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l89zj" Mar 12 15:15:48 crc kubenswrapper[4832]: I0312 15:15:48.833388 4832 scope.go:117] "RemoveContainer" containerID="f599e51cd1b0eee74251330c819e8c9a673b9d3b9ae501bedf6749891d02ae3a" Mar 12 15:15:48 crc kubenswrapper[4832]: I0312 15:15:48.871817 4832 scope.go:117] "RemoveContainer" containerID="81a62355b0c8fcc39c572b4588dfc2e81e705b8728d0fd9a4a509eb1fbee5beb" Mar 12 15:15:48 crc kubenswrapper[4832]: I0312 15:15:48.897974 4832 scope.go:117] "RemoveContainer" containerID="cbdfcce6e7c4ee5f88ff3159ee7c9808c05e702cf79858ba41ac125fec212e64" Mar 12 15:15:48 crc kubenswrapper[4832]: I0312 15:15:48.918307 4832 scope.go:117] "RemoveContainer" 
containerID="15547e582e9806db9422224d4f63bb6653271646c8776a2017fdd56cc6c4d73c" Mar 12 15:15:48 crc kubenswrapper[4832]: I0312 15:15:48.948375 4832 scope.go:117] "RemoveContainer" containerID="63e4d333d892723079b363f005c5cff5df106cee780c54a0b6af0bb9260adf70" Mar 12 15:15:51 crc kubenswrapper[4832]: I0312 15:15:51.620774 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:15:51 crc kubenswrapper[4832]: E0312 15:15:51.621639 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:15:54 crc kubenswrapper[4832]: I0312 15:15:54.159689 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l89zj" Mar 12 15:15:54 crc kubenswrapper[4832]: I0312 15:15:54.227562 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l89zj"] Mar 12 15:15:54 crc kubenswrapper[4832]: I0312 15:15:54.472911 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l89zj" podUID="a72f732d-0342-4f9f-8e06-7cb5a4065db3" containerName="registry-server" containerID="cri-o://e0f52bd63cdf90133f96d78595835128c115cc5641c334dc34c563369b292d22" gracePeriod=2 Mar 12 15:15:54 crc kubenswrapper[4832]: I0312 15:15:54.909980 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l89zj" Mar 12 15:15:54 crc kubenswrapper[4832]: I0312 15:15:54.965090 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlkss\" (UniqueName: \"kubernetes.io/projected/a72f732d-0342-4f9f-8e06-7cb5a4065db3-kube-api-access-jlkss\") pod \"a72f732d-0342-4f9f-8e06-7cb5a4065db3\" (UID: \"a72f732d-0342-4f9f-8e06-7cb5a4065db3\") " Mar 12 15:15:54 crc kubenswrapper[4832]: I0312 15:15:54.965282 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72f732d-0342-4f9f-8e06-7cb5a4065db3-utilities\") pod \"a72f732d-0342-4f9f-8e06-7cb5a4065db3\" (UID: \"a72f732d-0342-4f9f-8e06-7cb5a4065db3\") " Mar 12 15:15:54 crc kubenswrapper[4832]: I0312 15:15:54.965442 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72f732d-0342-4f9f-8e06-7cb5a4065db3-catalog-content\") pod \"a72f732d-0342-4f9f-8e06-7cb5a4065db3\" (UID: \"a72f732d-0342-4f9f-8e06-7cb5a4065db3\") " Mar 12 15:15:54 crc kubenswrapper[4832]: I0312 15:15:54.966746 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a72f732d-0342-4f9f-8e06-7cb5a4065db3-utilities" (OuterVolumeSpecName: "utilities") pod "a72f732d-0342-4f9f-8e06-7cb5a4065db3" (UID: "a72f732d-0342-4f9f-8e06-7cb5a4065db3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:15:54 crc kubenswrapper[4832]: I0312 15:15:54.970523 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a72f732d-0342-4f9f-8e06-7cb5a4065db3-kube-api-access-jlkss" (OuterVolumeSpecName: "kube-api-access-jlkss") pod "a72f732d-0342-4f9f-8e06-7cb5a4065db3" (UID: "a72f732d-0342-4f9f-8e06-7cb5a4065db3"). InnerVolumeSpecName "kube-api-access-jlkss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:15:54 crc kubenswrapper[4832]: I0312 15:15:54.998757 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a72f732d-0342-4f9f-8e06-7cb5a4065db3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a72f732d-0342-4f9f-8e06-7cb5a4065db3" (UID: "a72f732d-0342-4f9f-8e06-7cb5a4065db3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.054611 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-jvlmr"] Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.064261 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c091-account-create-update-5bw46"] Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.067831 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72f732d-0342-4f9f-8e06-7cb5a4065db3-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.067864 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlkss\" (UniqueName: \"kubernetes.io/projected/a72f732d-0342-4f9f-8e06-7cb5a4065db3-kube-api-access-jlkss\") on node \"crc\" DevicePath \"\"" Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.067877 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72f732d-0342-4f9f-8e06-7cb5a4065db3-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.072224 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4cbc-account-create-update-q4tzf"] Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.080283 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-slgfx"] Mar 12 
15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.088313 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-jvlmr"] Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.096030 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c091-account-create-update-5bw46"] Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.104577 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-4cbc-account-create-update-q4tzf"] Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.112787 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-slgfx"] Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.493169 4832 generic.go:334] "Generic (PLEG): container finished" podID="a72f732d-0342-4f9f-8e06-7cb5a4065db3" containerID="e0f52bd63cdf90133f96d78595835128c115cc5641c334dc34c563369b292d22" exitCode=0 Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.493245 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l89zj" event={"ID":"a72f732d-0342-4f9f-8e06-7cb5a4065db3","Type":"ContainerDied","Data":"e0f52bd63cdf90133f96d78595835128c115cc5641c334dc34c563369b292d22"} Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.493259 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l89zj" Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.493315 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l89zj" event={"ID":"a72f732d-0342-4f9f-8e06-7cb5a4065db3","Type":"ContainerDied","Data":"82af282108385ca53a9f8a8fb66dec776154a4d6b9b016665370679390fd9949"} Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.493349 4832 scope.go:117] "RemoveContainer" containerID="e0f52bd63cdf90133f96d78595835128c115cc5641c334dc34c563369b292d22" Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.554716 4832 scope.go:117] "RemoveContainer" containerID="bcd08202086861c0dabd5b3a6c0f2725998c385d78ad8095db41b16b36369a02" Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.557107 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l89zj"] Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.570115 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l89zj"] Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.596275 4832 scope.go:117] "RemoveContainer" containerID="f039a27ff714822e2b974f0a5f93971d0d9ef487116f79deaeb0c78ed1d3e716" Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.635214 4832 scope.go:117] "RemoveContainer" containerID="e0f52bd63cdf90133f96d78595835128c115cc5641c334dc34c563369b292d22" Mar 12 15:15:55 crc kubenswrapper[4832]: E0312 15:15:55.635735 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0f52bd63cdf90133f96d78595835128c115cc5641c334dc34c563369b292d22\": container with ID starting with e0f52bd63cdf90133f96d78595835128c115cc5641c334dc34c563369b292d22 not found: ID does not exist" containerID="e0f52bd63cdf90133f96d78595835128c115cc5641c334dc34c563369b292d22" Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.635784 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0f52bd63cdf90133f96d78595835128c115cc5641c334dc34c563369b292d22"} err="failed to get container status \"e0f52bd63cdf90133f96d78595835128c115cc5641c334dc34c563369b292d22\": rpc error: code = NotFound desc = could not find container \"e0f52bd63cdf90133f96d78595835128c115cc5641c334dc34c563369b292d22\": container with ID starting with e0f52bd63cdf90133f96d78595835128c115cc5641c334dc34c563369b292d22 not found: ID does not exist" Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.635813 4832 scope.go:117] "RemoveContainer" containerID="bcd08202086861c0dabd5b3a6c0f2725998c385d78ad8095db41b16b36369a02" Mar 12 15:15:55 crc kubenswrapper[4832]: E0312 15:15:55.636117 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcd08202086861c0dabd5b3a6c0f2725998c385d78ad8095db41b16b36369a02\": container with ID starting with bcd08202086861c0dabd5b3a6c0f2725998c385d78ad8095db41b16b36369a02 not found: ID does not exist" containerID="bcd08202086861c0dabd5b3a6c0f2725998c385d78ad8095db41b16b36369a02" Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.636162 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcd08202086861c0dabd5b3a6c0f2725998c385d78ad8095db41b16b36369a02"} err="failed to get container status \"bcd08202086861c0dabd5b3a6c0f2725998c385d78ad8095db41b16b36369a02\": rpc error: code = NotFound desc = could not find container \"bcd08202086861c0dabd5b3a6c0f2725998c385d78ad8095db41b16b36369a02\": container with ID starting with bcd08202086861c0dabd5b3a6c0f2725998c385d78ad8095db41b16b36369a02 not found: ID does not exist" Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.636189 4832 scope.go:117] "RemoveContainer" containerID="f039a27ff714822e2b974f0a5f93971d0d9ef487116f79deaeb0c78ed1d3e716" Mar 12 15:15:55 crc kubenswrapper[4832]: E0312 
15:15:55.636680 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f039a27ff714822e2b974f0a5f93971d0d9ef487116f79deaeb0c78ed1d3e716\": container with ID starting with f039a27ff714822e2b974f0a5f93971d0d9ef487116f79deaeb0c78ed1d3e716 not found: ID does not exist" containerID="f039a27ff714822e2b974f0a5f93971d0d9ef487116f79deaeb0c78ed1d3e716" Mar 12 15:15:55 crc kubenswrapper[4832]: I0312 15:15:55.636715 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f039a27ff714822e2b974f0a5f93971d0d9ef487116f79deaeb0c78ed1d3e716"} err="failed to get container status \"f039a27ff714822e2b974f0a5f93971d0d9ef487116f79deaeb0c78ed1d3e716\": rpc error: code = NotFound desc = could not find container \"f039a27ff714822e2b974f0a5f93971d0d9ef487116f79deaeb0c78ed1d3e716\": container with ID starting with f039a27ff714822e2b974f0a5f93971d0d9ef487116f79deaeb0c78ed1d3e716 not found: ID does not exist" Mar 12 15:15:56 crc kubenswrapper[4832]: I0312 15:15:56.635856 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44a982f3-9124-416e-b0b1-199a57462954" path="/var/lib/kubelet/pods/44a982f3-9124-416e-b0b1-199a57462954/volumes" Mar 12 15:15:56 crc kubenswrapper[4832]: I0312 15:15:56.637724 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f3430c0-f2f1-48bb-ae3f-2337f5ea30de" path="/var/lib/kubelet/pods/6f3430c0-f2f1-48bb-ae3f-2337f5ea30de/volumes" Mar 12 15:15:56 crc kubenswrapper[4832]: I0312 15:15:56.639016 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a72f732d-0342-4f9f-8e06-7cb5a4065db3" path="/var/lib/kubelet/pods/a72f732d-0342-4f9f-8e06-7cb5a4065db3/volumes" Mar 12 15:15:56 crc kubenswrapper[4832]: I0312 15:15:56.641691 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb3e6885-9e7b-4a4a-b237-3233fcbe2129" 
path="/var/lib/kubelet/pods/bb3e6885-9e7b-4a4a-b237-3233fcbe2129/volumes" Mar 12 15:15:56 crc kubenswrapper[4832]: I0312 15:15:56.643451 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c26b8772-ff30-45b4-a167-df95b7051fe3" path="/var/lib/kubelet/pods/c26b8772-ff30-45b4-a167-df95b7051fe3/volumes" Mar 12 15:15:57 crc kubenswrapper[4832]: I0312 15:15:57.047336 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e0bf-account-create-update-8zpsg"] Mar 12 15:15:57 crc kubenswrapper[4832]: I0312 15:15:57.070337 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-pjxk7"] Mar 12 15:15:57 crc kubenswrapper[4832]: I0312 15:15:57.082181 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-pjxk7"] Mar 12 15:15:57 crc kubenswrapper[4832]: I0312 15:15:57.090887 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e0bf-account-create-update-8zpsg"] Mar 12 15:15:58 crc kubenswrapper[4832]: I0312 15:15:58.629616 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ae29670-d754-4a65-b982-146c9f8e8f59" path="/var/lib/kubelet/pods/3ae29670-d754-4a65-b982-146c9f8e8f59/volumes" Mar 12 15:15:58 crc kubenswrapper[4832]: I0312 15:15:58.630385 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e44c919-5e7d-4b58-85d7-919cd52679e2" path="/var/lib/kubelet/pods/9e44c919-5e7d-4b58-85d7-919cd52679e2/volumes" Mar 12 15:16:00 crc kubenswrapper[4832]: I0312 15:16:00.148947 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555476-swzzl"] Mar 12 15:16:00 crc kubenswrapper[4832]: E0312 15:16:00.149697 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72f732d-0342-4f9f-8e06-7cb5a4065db3" containerName="extract-utilities" Mar 12 15:16:00 crc kubenswrapper[4832]: I0312 15:16:00.149715 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a72f732d-0342-4f9f-8e06-7cb5a4065db3" containerName="extract-utilities" Mar 12 15:16:00 crc kubenswrapper[4832]: E0312 15:16:00.149751 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72f732d-0342-4f9f-8e06-7cb5a4065db3" containerName="extract-content" Mar 12 15:16:00 crc kubenswrapper[4832]: I0312 15:16:00.149759 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72f732d-0342-4f9f-8e06-7cb5a4065db3" containerName="extract-content" Mar 12 15:16:00 crc kubenswrapper[4832]: E0312 15:16:00.149773 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72f732d-0342-4f9f-8e06-7cb5a4065db3" containerName="registry-server" Mar 12 15:16:00 crc kubenswrapper[4832]: I0312 15:16:00.149781 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72f732d-0342-4f9f-8e06-7cb5a4065db3" containerName="registry-server" Mar 12 15:16:00 crc kubenswrapper[4832]: I0312 15:16:00.150052 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a72f732d-0342-4f9f-8e06-7cb5a4065db3" containerName="registry-server" Mar 12 15:16:00 crc kubenswrapper[4832]: I0312 15:16:00.150796 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555476-swzzl" Mar 12 15:16:00 crc kubenswrapper[4832]: I0312 15:16:00.154730 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:16:00 crc kubenswrapper[4832]: I0312 15:16:00.155314 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:16:00 crc kubenswrapper[4832]: I0312 15:16:00.155758 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:16:00 crc kubenswrapper[4832]: I0312 15:16:00.161335 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555476-swzzl"] Mar 12 15:16:00 crc kubenswrapper[4832]: I0312 15:16:00.272068 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bvc6\" (UniqueName: \"kubernetes.io/projected/1c44e724-8877-4898-bb2b-d5acc63d0168-kube-api-access-5bvc6\") pod \"auto-csr-approver-29555476-swzzl\" (UID: \"1c44e724-8877-4898-bb2b-d5acc63d0168\") " pod="openshift-infra/auto-csr-approver-29555476-swzzl" Mar 12 15:16:00 crc kubenswrapper[4832]: I0312 15:16:00.375931 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bvc6\" (UniqueName: \"kubernetes.io/projected/1c44e724-8877-4898-bb2b-d5acc63d0168-kube-api-access-5bvc6\") pod \"auto-csr-approver-29555476-swzzl\" (UID: \"1c44e724-8877-4898-bb2b-d5acc63d0168\") " pod="openshift-infra/auto-csr-approver-29555476-swzzl" Mar 12 15:16:00 crc kubenswrapper[4832]: I0312 15:16:00.398979 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bvc6\" (UniqueName: \"kubernetes.io/projected/1c44e724-8877-4898-bb2b-d5acc63d0168-kube-api-access-5bvc6\") pod \"auto-csr-approver-29555476-swzzl\" (UID: \"1c44e724-8877-4898-bb2b-d5acc63d0168\") " 
pod="openshift-infra/auto-csr-approver-29555476-swzzl" Mar 12 15:16:00 crc kubenswrapper[4832]: I0312 15:16:00.471182 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555476-swzzl" Mar 12 15:16:00 crc kubenswrapper[4832]: I0312 15:16:00.913945 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555476-swzzl"] Mar 12 15:16:01 crc kubenswrapper[4832]: I0312 15:16:01.563413 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555476-swzzl" event={"ID":"1c44e724-8877-4898-bb2b-d5acc63d0168","Type":"ContainerStarted","Data":"8336b2fe108cb72212c7d544a321b7761edeb1430ffcca347ae605ca810cfbf2"} Mar 12 15:16:02 crc kubenswrapper[4832]: I0312 15:16:02.580332 4832 generic.go:334] "Generic (PLEG): container finished" podID="1c44e724-8877-4898-bb2b-d5acc63d0168" containerID="98ef35b8ebe8bff39e8497a69d2cc7e484b29a04905dbfd3d19a32d4efaf8df7" exitCode=0 Mar 12 15:16:02 crc kubenswrapper[4832]: I0312 15:16:02.580664 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555476-swzzl" event={"ID":"1c44e724-8877-4898-bb2b-d5acc63d0168","Type":"ContainerDied","Data":"98ef35b8ebe8bff39e8497a69d2cc7e484b29a04905dbfd3d19a32d4efaf8df7"} Mar 12 15:16:03 crc kubenswrapper[4832]: I0312 15:16:03.619380 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:16:03 crc kubenswrapper[4832]: E0312 15:16:03.619868 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" 
Mar 12 15:16:03 crc kubenswrapper[4832]: I0312 15:16:03.883758 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555476-swzzl" Mar 12 15:16:03 crc kubenswrapper[4832]: I0312 15:16:03.957792 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bvc6\" (UniqueName: \"kubernetes.io/projected/1c44e724-8877-4898-bb2b-d5acc63d0168-kube-api-access-5bvc6\") pod \"1c44e724-8877-4898-bb2b-d5acc63d0168\" (UID: \"1c44e724-8877-4898-bb2b-d5acc63d0168\") " Mar 12 15:16:03 crc kubenswrapper[4832]: I0312 15:16:03.980044 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c44e724-8877-4898-bb2b-d5acc63d0168-kube-api-access-5bvc6" (OuterVolumeSpecName: "kube-api-access-5bvc6") pod "1c44e724-8877-4898-bb2b-d5acc63d0168" (UID: "1c44e724-8877-4898-bb2b-d5acc63d0168"). InnerVolumeSpecName "kube-api-access-5bvc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:16:04 crc kubenswrapper[4832]: I0312 15:16:04.060472 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bvc6\" (UniqueName: \"kubernetes.io/projected/1c44e724-8877-4898-bb2b-d5acc63d0168-kube-api-access-5bvc6\") on node \"crc\" DevicePath \"\"" Mar 12 15:16:04 crc kubenswrapper[4832]: I0312 15:16:04.603563 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555476-swzzl" event={"ID":"1c44e724-8877-4898-bb2b-d5acc63d0168","Type":"ContainerDied","Data":"8336b2fe108cb72212c7d544a321b7761edeb1430ffcca347ae605ca810cfbf2"} Mar 12 15:16:04 crc kubenswrapper[4832]: I0312 15:16:04.603883 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8336b2fe108cb72212c7d544a321b7761edeb1430ffcca347ae605ca810cfbf2" Mar 12 15:16:04 crc kubenswrapper[4832]: I0312 15:16:04.603686 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555476-swzzl" Mar 12 15:16:04 crc kubenswrapper[4832]: I0312 15:16:04.940002 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555470-brnp7"] Mar 12 15:16:04 crc kubenswrapper[4832]: I0312 15:16:04.949792 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555470-brnp7"] Mar 12 15:16:06 crc kubenswrapper[4832]: I0312 15:16:06.639191 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fa1e044-9fc1-4e3b-a815-5a74d4d3913c" path="/var/lib/kubelet/pods/1fa1e044-9fc1-4e3b-a815-5a74d4d3913c/volumes" Mar 12 15:16:16 crc kubenswrapper[4832]: I0312 15:16:16.621103 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:16:16 crc kubenswrapper[4832]: E0312 15:16:16.621919 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:16:23 crc kubenswrapper[4832]: I0312 15:16:23.063142 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-w2psf"] Mar 12 15:16:23 crc kubenswrapper[4832]: I0312 15:16:23.074687 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-w2psf"] Mar 12 15:16:24 crc kubenswrapper[4832]: I0312 15:16:24.634445 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98264f95-8803-42bb-b985-359a3556a90c" path="/var/lib/kubelet/pods/98264f95-8803-42bb-b985-359a3556a90c/volumes" Mar 12 15:16:28 crc kubenswrapper[4832]: I0312 15:16:28.620757 4832 scope.go:117] 
"RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:16:28 crc kubenswrapper[4832]: E0312 15:16:28.622054 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:16:31 crc kubenswrapper[4832]: I0312 15:16:31.083750 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b608-account-create-update-6gg2k"] Mar 12 15:16:31 crc kubenswrapper[4832]: I0312 15:16:31.099975 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-j7gh8"] Mar 12 15:16:31 crc kubenswrapper[4832]: I0312 15:16:31.116164 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-slglh"] Mar 12 15:16:31 crc kubenswrapper[4832]: I0312 15:16:31.126563 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-j7gh8"] Mar 12 15:16:31 crc kubenswrapper[4832]: I0312 15:16:31.136219 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-fdtv9"] Mar 12 15:16:31 crc kubenswrapper[4832]: I0312 15:16:31.147053 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b608-account-create-update-6gg2k"] Mar 12 15:16:31 crc kubenswrapper[4832]: I0312 15:16:31.155636 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-slglh"] Mar 12 15:16:31 crc kubenswrapper[4832]: I0312 15:16:31.162202 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-fdtv9"] Mar 12 15:16:32 crc kubenswrapper[4832]: I0312 15:16:32.642913 4832 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="10c43e56-caa8-4f19-8ca7-52f6b551b0e6" path="/var/lib/kubelet/pods/10c43e56-caa8-4f19-8ca7-52f6b551b0e6/volumes" Mar 12 15:16:32 crc kubenswrapper[4832]: I0312 15:16:32.644699 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53942e7f-8c9b-4762-8f00-1f382fa40da8" path="/var/lib/kubelet/pods/53942e7f-8c9b-4762-8f00-1f382fa40da8/volumes" Mar 12 15:16:32 crc kubenswrapper[4832]: I0312 15:16:32.646036 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2a71c99-f30a-4882-94b5-fb73111f41c2" path="/var/lib/kubelet/pods/a2a71c99-f30a-4882-94b5-fb73111f41c2/volumes" Mar 12 15:16:32 crc kubenswrapper[4832]: I0312 15:16:32.647452 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa3ac3c3-bdf7-47c3-9e81-679ea2c44155" path="/var/lib/kubelet/pods/aa3ac3c3-bdf7-47c3-9e81-679ea2c44155/volumes" Mar 12 15:16:34 crc kubenswrapper[4832]: I0312 15:16:34.037945 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8694-account-create-update-dcf4c"] Mar 12 15:16:34 crc kubenswrapper[4832]: I0312 15:16:34.048712 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8694-account-create-update-dcf4c"] Mar 12 15:16:34 crc kubenswrapper[4832]: I0312 15:16:34.059321 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-00a7-account-create-update-kzbhm"] Mar 12 15:16:34 crc kubenswrapper[4832]: I0312 15:16:34.067252 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-00a7-account-create-update-kzbhm"] Mar 12 15:16:34 crc kubenswrapper[4832]: I0312 15:16:34.642036 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0504dba2-4849-450f-b10c-9669184c2820" path="/var/lib/kubelet/pods/0504dba2-4849-450f-b10c-9669184c2820/volumes" Mar 12 15:16:34 crc kubenswrapper[4832]: I0312 15:16:34.643763 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7be99adb-61ba-476f-94ad-7e6015445091" path="/var/lib/kubelet/pods/7be99adb-61ba-476f-94ad-7e6015445091/volumes" Mar 12 15:16:37 crc kubenswrapper[4832]: I0312 15:16:37.052417 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5zp7p"] Mar 12 15:16:37 crc kubenswrapper[4832]: I0312 15:16:37.073938 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5zp7p"] Mar 12 15:16:38 crc kubenswrapper[4832]: I0312 15:16:38.637393 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27df49fc-7f53-4145-97b1-9acdbb768496" path="/var/lib/kubelet/pods/27df49fc-7f53-4145-97b1-9acdbb768496/volumes" Mar 12 15:16:39 crc kubenswrapper[4832]: I0312 15:16:39.058911 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-2pdk5"] Mar 12 15:16:39 crc kubenswrapper[4832]: I0312 15:16:39.071120 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-2pdk5"] Mar 12 15:16:40 crc kubenswrapper[4832]: I0312 15:16:40.619848 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:16:40 crc kubenswrapper[4832]: E0312 15:16:40.620848 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:16:40 crc kubenswrapper[4832]: I0312 15:16:40.638675 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5" path="/var/lib/kubelet/pods/58dd55f1-b352-4a2f-a4c6-aa8c0aa34cc5/volumes" Mar 12 15:16:41 crc kubenswrapper[4832]: I0312 
15:16:41.056620 4832 generic.go:334] "Generic (PLEG): container finished" podID="aba57da9-d394-41df-a7ea-23344bad0e60" containerID="adfa445db3583797f1f232f79d7741c2a5672c1d8cefa540a18fb4ac25e2fbf4" exitCode=0 Mar 12 15:16:41 crc kubenswrapper[4832]: I0312 15:16:41.056750 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh" event={"ID":"aba57da9-d394-41df-a7ea-23344bad0e60","Type":"ContainerDied","Data":"adfa445db3583797f1f232f79d7741c2a5672c1d8cefa540a18fb4ac25e2fbf4"} Mar 12 15:16:42 crc kubenswrapper[4832]: I0312 15:16:42.617994 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh" Mar 12 15:16:42 crc kubenswrapper[4832]: I0312 15:16:42.786800 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdrnd\" (UniqueName: \"kubernetes.io/projected/aba57da9-d394-41df-a7ea-23344bad0e60-kube-api-access-wdrnd\") pod \"aba57da9-d394-41df-a7ea-23344bad0e60\" (UID: \"aba57da9-d394-41df-a7ea-23344bad0e60\") " Mar 12 15:16:42 crc kubenswrapper[4832]: I0312 15:16:42.786959 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aba57da9-d394-41df-a7ea-23344bad0e60-inventory\") pod \"aba57da9-d394-41df-a7ea-23344bad0e60\" (UID: \"aba57da9-d394-41df-a7ea-23344bad0e60\") " Mar 12 15:16:42 crc kubenswrapper[4832]: I0312 15:16:42.787027 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aba57da9-d394-41df-a7ea-23344bad0e60-ssh-key-openstack-edpm-ipam\") pod \"aba57da9-d394-41df-a7ea-23344bad0e60\" (UID: \"aba57da9-d394-41df-a7ea-23344bad0e60\") " Mar 12 15:16:42 crc kubenswrapper[4832]: I0312 15:16:42.793439 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/aba57da9-d394-41df-a7ea-23344bad0e60-kube-api-access-wdrnd" (OuterVolumeSpecName: "kube-api-access-wdrnd") pod "aba57da9-d394-41df-a7ea-23344bad0e60" (UID: "aba57da9-d394-41df-a7ea-23344bad0e60"). InnerVolumeSpecName "kube-api-access-wdrnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:16:42 crc kubenswrapper[4832]: I0312 15:16:42.816924 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aba57da9-d394-41df-a7ea-23344bad0e60-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "aba57da9-d394-41df-a7ea-23344bad0e60" (UID: "aba57da9-d394-41df-a7ea-23344bad0e60"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:16:42 crc kubenswrapper[4832]: I0312 15:16:42.821488 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aba57da9-d394-41df-a7ea-23344bad0e60-inventory" (OuterVolumeSpecName: "inventory") pod "aba57da9-d394-41df-a7ea-23344bad0e60" (UID: "aba57da9-d394-41df-a7ea-23344bad0e60"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:16:42 crc kubenswrapper[4832]: I0312 15:16:42.891000 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdrnd\" (UniqueName: \"kubernetes.io/projected/aba57da9-d394-41df-a7ea-23344bad0e60-kube-api-access-wdrnd\") on node \"crc\" DevicePath \"\"" Mar 12 15:16:42 crc kubenswrapper[4832]: I0312 15:16:42.891055 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aba57da9-d394-41df-a7ea-23344bad0e60-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 15:16:42 crc kubenswrapper[4832]: I0312 15:16:42.891074 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aba57da9-d394-41df-a7ea-23344bad0e60-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.082749 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh" event={"ID":"aba57da9-d394-41df-a7ea-23344bad0e60","Type":"ContainerDied","Data":"3b5bcb03b45b44a273a0f3128c0e76a2fc8a405b3196da2d63009837a922cc91"} Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.082803 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b5bcb03b45b44a273a0f3128c0e76a2fc8a405b3196da2d63009837a922cc91" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.082871 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9czwh" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.181369 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg"] Mar 12 15:16:43 crc kubenswrapper[4832]: E0312 15:16:43.181777 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aba57da9-d394-41df-a7ea-23344bad0e60" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.181794 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="aba57da9-d394-41df-a7ea-23344bad0e60" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 12 15:16:43 crc kubenswrapper[4832]: E0312 15:16:43.181821 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c44e724-8877-4898-bb2b-d5acc63d0168" containerName="oc" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.181833 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c44e724-8877-4898-bb2b-d5acc63d0168" containerName="oc" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.182018 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="aba57da9-d394-41df-a7ea-23344bad0e60" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.182051 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c44e724-8877-4898-bb2b-d5acc63d0168" containerName="oc" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.182674 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.184884 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.185227 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.185555 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.186443 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6npm" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.190240 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg"] Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.299525 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhkzh\" (UniqueName: \"kubernetes.io/projected/0a779f8a-9311-43e6-add6-68e19f39aadd-kube-api-access-jhkzh\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-drmzg\" (UID: \"0a779f8a-9311-43e6-add6-68e19f39aadd\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.299599 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a779f8a-9311-43e6-add6-68e19f39aadd-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-drmzg\" (UID: \"0a779f8a-9311-43e6-add6-68e19f39aadd\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg" Mar 12 
15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.299664 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a779f8a-9311-43e6-add6-68e19f39aadd-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-drmzg\" (UID: \"0a779f8a-9311-43e6-add6-68e19f39aadd\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.401898 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhkzh\" (UniqueName: \"kubernetes.io/projected/0a779f8a-9311-43e6-add6-68e19f39aadd-kube-api-access-jhkzh\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-drmzg\" (UID: \"0a779f8a-9311-43e6-add6-68e19f39aadd\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.402001 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a779f8a-9311-43e6-add6-68e19f39aadd-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-drmzg\" (UID: \"0a779f8a-9311-43e6-add6-68e19f39aadd\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.402106 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a779f8a-9311-43e6-add6-68e19f39aadd-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-drmzg\" (UID: \"0a779f8a-9311-43e6-add6-68e19f39aadd\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.411120 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/0a779f8a-9311-43e6-add6-68e19f39aadd-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-drmzg\" (UID: \"0a779f8a-9311-43e6-add6-68e19f39aadd\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.413935 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a779f8a-9311-43e6-add6-68e19f39aadd-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-drmzg\" (UID: \"0a779f8a-9311-43e6-add6-68e19f39aadd\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.420809 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhkzh\" (UniqueName: \"kubernetes.io/projected/0a779f8a-9311-43e6-add6-68e19f39aadd-kube-api-access-jhkzh\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-drmzg\" (UID: \"0a779f8a-9311-43e6-add6-68e19f39aadd\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg" Mar 12 15:16:43 crc kubenswrapper[4832]: I0312 15:16:43.502918 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg" Mar 12 15:16:44 crc kubenswrapper[4832]: I0312 15:16:44.045674 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg"] Mar 12 15:16:44 crc kubenswrapper[4832]: I0312 15:16:44.054343 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 15:16:44 crc kubenswrapper[4832]: I0312 15:16:44.094227 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg" event={"ID":"0a779f8a-9311-43e6-add6-68e19f39aadd","Type":"ContainerStarted","Data":"5023d809b194e57d85b48d43f743f8c993d3044b5f568feac48afbe8c8e2868b"} Mar 12 15:16:45 crc kubenswrapper[4832]: I0312 15:16:45.111228 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg" event={"ID":"0a779f8a-9311-43e6-add6-68e19f39aadd","Type":"ContainerStarted","Data":"ac4af2a91d570b6ddba75f31d65fdac01108ee77314612f4920432a1ad4181ae"} Mar 12 15:16:45 crc kubenswrapper[4832]: I0312 15:16:45.134483 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg" podStartSLOduration=1.642195005 podStartE2EDuration="2.13446121s" podCreationTimestamp="2026-03-12 15:16:43 +0000 UTC" firstStartedPulling="2026-03-12 15:16:44.054090531 +0000 UTC m=+1762.698104757" lastFinishedPulling="2026-03-12 15:16:44.546356726 +0000 UTC m=+1763.190370962" observedRunningTime="2026-03-12 15:16:45.128801499 +0000 UTC m=+1763.772815725" watchObservedRunningTime="2026-03-12 15:16:45.13446121 +0000 UTC m=+1763.778475436" Mar 12 15:16:49 crc kubenswrapper[4832]: I0312 15:16:49.027652 4832 scope.go:117] "RemoveContainer" containerID="c7ef3fce7bf4a3495d4155eb1878ea12e2faca82b721740b4821bf877012e0ee" Mar 12 
15:16:49 crc kubenswrapper[4832]: I0312 15:16:49.074238 4832 scope.go:117] "RemoveContainer" containerID="78d04d8c56396efeb67f93c34934cb499fe5068f424cf7cf5e5a938808f6df74" Mar 12 15:16:49 crc kubenswrapper[4832]: I0312 15:16:49.129081 4832 scope.go:117] "RemoveContainer" containerID="7b9c130bc432728aedb0aa629352275e394c9a9b23a74a177a256d240d3d6b33" Mar 12 15:16:49 crc kubenswrapper[4832]: I0312 15:16:49.172792 4832 scope.go:117] "RemoveContainer" containerID="f09551b0d4eebac467ba24f2aaaceface48d97d0b642caaab285900ffa7761d1" Mar 12 15:16:49 crc kubenswrapper[4832]: I0312 15:16:49.228244 4832 scope.go:117] "RemoveContainer" containerID="5ec319f77b298eabd923f2465ee71737df381285e38544ac1ae0e2a410298850" Mar 12 15:16:49 crc kubenswrapper[4832]: I0312 15:16:49.263980 4832 scope.go:117] "RemoveContainer" containerID="b329b4b464e5842d3ebc0d53dbf11c0bf62ea86202ef39aeef2c06cfbc5ce3cf" Mar 12 15:16:49 crc kubenswrapper[4832]: I0312 15:16:49.314429 4832 scope.go:117] "RemoveContainer" containerID="e49f6b2a0e4a87b23acd73b768c7d9122768bdef97e443e1beac18bb6e396bfb" Mar 12 15:16:49 crc kubenswrapper[4832]: I0312 15:16:49.335944 4832 scope.go:117] "RemoveContainer" containerID="4d6c484a89af206a0bfb57072bb3fc0403a2da4308765b0e758cb6271072a3fd" Mar 12 15:16:49 crc kubenswrapper[4832]: I0312 15:16:49.372287 4832 scope.go:117] "RemoveContainer" containerID="7ab9e8f6ced3b89e3e743afc956f1d52bdf251fed28f38a83b18c1c477fd863c" Mar 12 15:16:49 crc kubenswrapper[4832]: I0312 15:16:49.394127 4832 scope.go:117] "RemoveContainer" containerID="b0d26359005d78b681abb9ff6bbc13cf71ec617e38c46b82e216ebdc093d29bd" Mar 12 15:16:49 crc kubenswrapper[4832]: I0312 15:16:49.428888 4832 scope.go:117] "RemoveContainer" containerID="6446febc605dfdb695dcd7263ca8a17196203857c35e846e96842e2bf9504d36" Mar 12 15:16:49 crc kubenswrapper[4832]: I0312 15:16:49.455000 4832 scope.go:117] "RemoveContainer" containerID="37da2e82dbc02e5d6021889807d0faa9781ec85b5448a46fa01edc9515ec1f54" Mar 12 15:16:49 crc 
kubenswrapper[4832]: I0312 15:16:49.484821 4832 scope.go:117] "RemoveContainer" containerID="612633c1c418b6fe62958f6e429b4c03a92d3a33f959bc5499f431c6949a2f68" Mar 12 15:16:49 crc kubenswrapper[4832]: I0312 15:16:49.505179 4832 scope.go:117] "RemoveContainer" containerID="5943410cce50b601e7ef4b212dbe01a2117d47c37961351321f07668805e5e5e" Mar 12 15:16:49 crc kubenswrapper[4832]: I0312 15:16:49.522756 4832 scope.go:117] "RemoveContainer" containerID="45c26146034a839c32ae2ef84930b9b9cf07c7cc02a278cf63059e116f822c28" Mar 12 15:16:49 crc kubenswrapper[4832]: I0312 15:16:49.553922 4832 scope.go:117] "RemoveContainer" containerID="bcfa915cd490c630db2abc2c0e37dad1a1b7815453d1739fb01f6570e26fc849" Mar 12 15:16:55 crc kubenswrapper[4832]: I0312 15:16:55.619777 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:16:55 crc kubenswrapper[4832]: E0312 15:16:55.620735 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:17:08 crc kubenswrapper[4832]: I0312 15:17:08.620410 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:17:08 crc kubenswrapper[4832]: E0312 15:17:08.621588 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" 
podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:17:10 crc kubenswrapper[4832]: I0312 15:17:10.060033 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-sf9s6"] Mar 12 15:17:10 crc kubenswrapper[4832]: I0312 15:17:10.071899 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-sf9s6"] Mar 12 15:17:10 crc kubenswrapper[4832]: I0312 15:17:10.634416 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40b2fcd2-d826-44f8-a9e1-125b17905fae" path="/var/lib/kubelet/pods/40b2fcd2-d826-44f8-a9e1-125b17905fae/volumes" Mar 12 15:17:18 crc kubenswrapper[4832]: I0312 15:17:18.052778 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6psc2"] Mar 12 15:17:18 crc kubenswrapper[4832]: I0312 15:17:18.070446 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-pz5qf"] Mar 12 15:17:18 crc kubenswrapper[4832]: I0312 15:17:18.079078 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6psc2"] Mar 12 15:17:18 crc kubenswrapper[4832]: I0312 15:17:18.090092 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-pz5qf"] Mar 12 15:17:18 crc kubenswrapper[4832]: I0312 15:17:18.639136 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdded1bd-9b32-465d-9226-618cf5d0e8bb" path="/var/lib/kubelet/pods/bdded1bd-9b32-465d-9226-618cf5d0e8bb/volumes" Mar 12 15:17:18 crc kubenswrapper[4832]: I0312 15:17:18.640328 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1282c2b-8bd5-4beb-a929-b86c1ae950a6" path="/var/lib/kubelet/pods/f1282c2b-8bd5-4beb-a929-b86c1ae950a6/volumes" Mar 12 15:17:23 crc kubenswrapper[4832]: I0312 15:17:23.620533 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:17:23 crc kubenswrapper[4832]: E0312 15:17:23.621890 4832 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:17:31 crc kubenswrapper[4832]: I0312 15:17:31.041755 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-brwnc"] Mar 12 15:17:31 crc kubenswrapper[4832]: I0312 15:17:31.056759 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-brwnc"] Mar 12 15:17:32 crc kubenswrapper[4832]: I0312 15:17:32.634234 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f2adafe-55f5-4149-893d-bdf63ec5ef7d" path="/var/lib/kubelet/pods/6f2adafe-55f5-4149-893d-bdf63ec5ef7d/volumes" Mar 12 15:17:33 crc kubenswrapper[4832]: I0312 15:17:33.027269 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-2hkzk"] Mar 12 15:17:33 crc kubenswrapper[4832]: I0312 15:17:33.038818 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-2hkzk"] Mar 12 15:17:34 crc kubenswrapper[4832]: I0312 15:17:34.977264 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a7d0054-4697-4cbb-bc50-18024fc3bfbc" path="/var/lib/kubelet/pods/8a7d0054-4697-4cbb-bc50-18024fc3bfbc/volumes" Mar 12 15:17:35 crc kubenswrapper[4832]: I0312 15:17:35.619927 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:17:35 crc kubenswrapper[4832]: E0312 15:17:35.620594 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:17:47 crc kubenswrapper[4832]: I0312 15:17:47.620698 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:17:47 crc kubenswrapper[4832]: E0312 15:17:47.621537 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:17:49 crc kubenswrapper[4832]: I0312 15:17:49.874268 4832 scope.go:117] "RemoveContainer" containerID="125e7aa951be5b777ab1984fa78a73bf6e1be246e5d0e99f98120952d5b3fcce" Mar 12 15:17:49 crc kubenswrapper[4832]: I0312 15:17:49.925489 4832 scope.go:117] "RemoveContainer" containerID="0acff1ea10ef3df7b6acf3e00e19b2bd07b63360c8b001c9bb8ced8793dc927a" Mar 12 15:17:50 crc kubenswrapper[4832]: I0312 15:17:50.012027 4832 scope.go:117] "RemoveContainer" containerID="bb0fe606493cdd02c1840105af702a1ece7476b3cd77e86712d7eeef567cff6b" Mar 12 15:17:50 crc kubenswrapper[4832]: I0312 15:17:50.069804 4832 scope.go:117] "RemoveContainer" containerID="5ce37d8893e2ae7f44f7ebba50d87cf8c314028503b58201f32899347b55e391" Mar 12 15:17:50 crc kubenswrapper[4832]: I0312 15:17:50.112534 4832 scope.go:117] "RemoveContainer" containerID="27d3c0f53caf72eaced680d7ed078fe786c01365274749f744a9ae440320d658" Mar 12 15:17:55 crc kubenswrapper[4832]: I0312 15:17:55.243264 4832 generic.go:334] "Generic (PLEG): container finished" podID="0a779f8a-9311-43e6-add6-68e19f39aadd" 
containerID="ac4af2a91d570b6ddba75f31d65fdac01108ee77314612f4920432a1ad4181ae" exitCode=0 Mar 12 15:17:55 crc kubenswrapper[4832]: I0312 15:17:55.243350 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg" event={"ID":"0a779f8a-9311-43e6-add6-68e19f39aadd","Type":"ContainerDied","Data":"ac4af2a91d570b6ddba75f31d65fdac01108ee77314612f4920432a1ad4181ae"} Mar 12 15:17:56 crc kubenswrapper[4832]: I0312 15:17:56.762967 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg" Mar 12 15:17:56 crc kubenswrapper[4832]: I0312 15:17:56.837086 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhkzh\" (UniqueName: \"kubernetes.io/projected/0a779f8a-9311-43e6-add6-68e19f39aadd-kube-api-access-jhkzh\") pod \"0a779f8a-9311-43e6-add6-68e19f39aadd\" (UID: \"0a779f8a-9311-43e6-add6-68e19f39aadd\") " Mar 12 15:17:56 crc kubenswrapper[4832]: I0312 15:17:56.837288 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a779f8a-9311-43e6-add6-68e19f39aadd-ssh-key-openstack-edpm-ipam\") pod \"0a779f8a-9311-43e6-add6-68e19f39aadd\" (UID: \"0a779f8a-9311-43e6-add6-68e19f39aadd\") " Mar 12 15:17:56 crc kubenswrapper[4832]: I0312 15:17:56.837443 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a779f8a-9311-43e6-add6-68e19f39aadd-inventory\") pod \"0a779f8a-9311-43e6-add6-68e19f39aadd\" (UID: \"0a779f8a-9311-43e6-add6-68e19f39aadd\") " Mar 12 15:17:56 crc kubenswrapper[4832]: I0312 15:17:56.859635 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a779f8a-9311-43e6-add6-68e19f39aadd-kube-api-access-jhkzh" (OuterVolumeSpecName: 
"kube-api-access-jhkzh") pod "0a779f8a-9311-43e6-add6-68e19f39aadd" (UID: "0a779f8a-9311-43e6-add6-68e19f39aadd"). InnerVolumeSpecName "kube-api-access-jhkzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:17:56 crc kubenswrapper[4832]: I0312 15:17:56.882779 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a779f8a-9311-43e6-add6-68e19f39aadd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0a779f8a-9311-43e6-add6-68e19f39aadd" (UID: "0a779f8a-9311-43e6-add6-68e19f39aadd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:17:56 crc kubenswrapper[4832]: I0312 15:17:56.886086 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a779f8a-9311-43e6-add6-68e19f39aadd-inventory" (OuterVolumeSpecName: "inventory") pod "0a779f8a-9311-43e6-add6-68e19f39aadd" (UID: "0a779f8a-9311-43e6-add6-68e19f39aadd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:17:56 crc kubenswrapper[4832]: I0312 15:17:56.940160 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhkzh\" (UniqueName: \"kubernetes.io/projected/0a779f8a-9311-43e6-add6-68e19f39aadd-kube-api-access-jhkzh\") on node \"crc\" DevicePath \"\"" Mar 12 15:17:56 crc kubenswrapper[4832]: I0312 15:17:56.940208 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a779f8a-9311-43e6-add6-68e19f39aadd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 15:17:56 crc kubenswrapper[4832]: I0312 15:17:56.940222 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a779f8a-9311-43e6-add6-68e19f39aadd-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.269890 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg" event={"ID":"0a779f8a-9311-43e6-add6-68e19f39aadd","Type":"ContainerDied","Data":"5023d809b194e57d85b48d43f743f8c993d3044b5f568feac48afbe8c8e2868b"} Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.270255 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5023d809b194e57d85b48d43f743f8c993d3044b5f568feac48afbe8c8e2868b" Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.269994 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-drmzg" Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.404848 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp"] Mar 12 15:17:57 crc kubenswrapper[4832]: E0312 15:17:57.405528 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a779f8a-9311-43e6-add6-68e19f39aadd" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.405640 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a779f8a-9311-43e6-add6-68e19f39aadd" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.406050 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a779f8a-9311-43e6-add6-68e19f39aadd" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.407144 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp" Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.409786 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.410278 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.410880 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6npm" Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.411192 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.423874 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp"] Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.553141 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5de93141-455a-41ad-8137-2f14127035f7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp\" (UID: \"5de93141-455a-41ad-8137-2f14127035f7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp" Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.553284 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8g6z\" (UniqueName: \"kubernetes.io/projected/5de93141-455a-41ad-8137-2f14127035f7-kube-api-access-h8g6z\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp\" (UID: \"5de93141-455a-41ad-8137-2f14127035f7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp" Mar 12 
15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.553624 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5de93141-455a-41ad-8137-2f14127035f7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp\" (UID: \"5de93141-455a-41ad-8137-2f14127035f7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp" Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.655282 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5de93141-455a-41ad-8137-2f14127035f7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp\" (UID: \"5de93141-455a-41ad-8137-2f14127035f7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp" Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.655428 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8g6z\" (UniqueName: \"kubernetes.io/projected/5de93141-455a-41ad-8137-2f14127035f7-kube-api-access-h8g6z\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp\" (UID: \"5de93141-455a-41ad-8137-2f14127035f7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp" Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.655815 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5de93141-455a-41ad-8137-2f14127035f7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp\" (UID: \"5de93141-455a-41ad-8137-2f14127035f7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp" Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.664876 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/5de93141-455a-41ad-8137-2f14127035f7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp\" (UID: \"5de93141-455a-41ad-8137-2f14127035f7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp" Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.695762 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5de93141-455a-41ad-8137-2f14127035f7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp\" (UID: \"5de93141-455a-41ad-8137-2f14127035f7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp" Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.702967 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8g6z\" (UniqueName: \"kubernetes.io/projected/5de93141-455a-41ad-8137-2f14127035f7-kube-api-access-h8g6z\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp\" (UID: \"5de93141-455a-41ad-8137-2f14127035f7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp" Mar 12 15:17:57 crc kubenswrapper[4832]: I0312 15:17:57.736903 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp" Mar 12 15:17:58 crc kubenswrapper[4832]: I0312 15:17:58.334797 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp"] Mar 12 15:17:59 crc kubenswrapper[4832]: I0312 15:17:59.289538 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp" event={"ID":"5de93141-455a-41ad-8137-2f14127035f7","Type":"ContainerStarted","Data":"0abc4baa806f7dcdb742a4fd5e91b1c9608814c00d096ff9de147dae3e441111"} Mar 12 15:17:59 crc kubenswrapper[4832]: I0312 15:17:59.290166 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp" event={"ID":"5de93141-455a-41ad-8137-2f14127035f7","Type":"ContainerStarted","Data":"af42848e0e1f13b7dad230b818a3ddf0c886313aea28363f11490e67ed29f2d9"} Mar 12 15:18:00 crc kubenswrapper[4832]: I0312 15:18:00.137132 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp" podStartSLOduration=2.639413122 podStartE2EDuration="3.137110322s" podCreationTimestamp="2026-03-12 15:17:57 +0000 UTC" firstStartedPulling="2026-03-12 15:17:58.342265071 +0000 UTC m=+1836.986279307" lastFinishedPulling="2026-03-12 15:17:58.839962241 +0000 UTC m=+1837.483976507" observedRunningTime="2026-03-12 15:17:59.303162969 +0000 UTC m=+1837.947177195" watchObservedRunningTime="2026-03-12 15:18:00.137110322 +0000 UTC m=+1838.781124558" Mar 12 15:18:00 crc kubenswrapper[4832]: I0312 15:18:00.141292 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555478-5rzlh"] Mar 12 15:18:00 crc kubenswrapper[4832]: I0312 15:18:00.142802 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555478-5rzlh" Mar 12 15:18:00 crc kubenswrapper[4832]: I0312 15:18:00.146035 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:18:00 crc kubenswrapper[4832]: I0312 15:18:00.146120 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:18:00 crc kubenswrapper[4832]: I0312 15:18:00.148984 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:18:00 crc kubenswrapper[4832]: I0312 15:18:00.162385 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555478-5rzlh"] Mar 12 15:18:00 crc kubenswrapper[4832]: I0312 15:18:00.207728 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvdfj\" (UniqueName: \"kubernetes.io/projected/d8580854-c938-48ab-828c-406427eab926-kube-api-access-zvdfj\") pod \"auto-csr-approver-29555478-5rzlh\" (UID: \"d8580854-c938-48ab-828c-406427eab926\") " pod="openshift-infra/auto-csr-approver-29555478-5rzlh" Mar 12 15:18:00 crc kubenswrapper[4832]: I0312 15:18:00.310878 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvdfj\" (UniqueName: \"kubernetes.io/projected/d8580854-c938-48ab-828c-406427eab926-kube-api-access-zvdfj\") pod \"auto-csr-approver-29555478-5rzlh\" (UID: \"d8580854-c938-48ab-828c-406427eab926\") " pod="openshift-infra/auto-csr-approver-29555478-5rzlh" Mar 12 15:18:00 crc kubenswrapper[4832]: I0312 15:18:00.341048 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvdfj\" (UniqueName: \"kubernetes.io/projected/d8580854-c938-48ab-828c-406427eab926-kube-api-access-zvdfj\") pod \"auto-csr-approver-29555478-5rzlh\" (UID: \"d8580854-c938-48ab-828c-406427eab926\") " 
pod="openshift-infra/auto-csr-approver-29555478-5rzlh" Mar 12 15:18:00 crc kubenswrapper[4832]: I0312 15:18:00.477422 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555478-5rzlh" Mar 12 15:18:00 crc kubenswrapper[4832]: I0312 15:18:00.620572 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:18:00 crc kubenswrapper[4832]: E0312 15:18:00.621190 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:18:00 crc kubenswrapper[4832]: I0312 15:18:00.934822 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555478-5rzlh"] Mar 12 15:18:00 crc kubenswrapper[4832]: W0312 15:18:00.949932 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8580854_c938_48ab_828c_406427eab926.slice/crio-6fc40240ed0e4f460e388ad545ca8e1903357d9375cbbc299519268795128274 WatchSource:0}: Error finding container 6fc40240ed0e4f460e388ad545ca8e1903357d9375cbbc299519268795128274: Status 404 returned error can't find the container with id 6fc40240ed0e4f460e388ad545ca8e1903357d9375cbbc299519268795128274 Mar 12 15:18:01 crc kubenswrapper[4832]: I0312 15:18:01.310164 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555478-5rzlh" event={"ID":"d8580854-c938-48ab-828c-406427eab926","Type":"ContainerStarted","Data":"6fc40240ed0e4f460e388ad545ca8e1903357d9375cbbc299519268795128274"} Mar 12 15:18:02 crc 
kubenswrapper[4832]: I0312 15:18:02.326646 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555478-5rzlh" event={"ID":"d8580854-c938-48ab-828c-406427eab926","Type":"ContainerStarted","Data":"f3e4beab8740363aa3eb579b9b5db21575d6dc3e5327c5917080f9c5626c1a56"} Mar 12 15:18:02 crc kubenswrapper[4832]: I0312 15:18:02.352386 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555478-5rzlh" podStartSLOduration=1.326820484 podStartE2EDuration="2.352358591s" podCreationTimestamp="2026-03-12 15:18:00 +0000 UTC" firstStartedPulling="2026-03-12 15:18:00.953942748 +0000 UTC m=+1839.597956994" lastFinishedPulling="2026-03-12 15:18:01.979480865 +0000 UTC m=+1840.623495101" observedRunningTime="2026-03-12 15:18:02.339822304 +0000 UTC m=+1840.983836570" watchObservedRunningTime="2026-03-12 15:18:02.352358591 +0000 UTC m=+1840.996372857" Mar 12 15:18:03 crc kubenswrapper[4832]: I0312 15:18:03.342724 4832 generic.go:334] "Generic (PLEG): container finished" podID="d8580854-c938-48ab-828c-406427eab926" containerID="f3e4beab8740363aa3eb579b9b5db21575d6dc3e5327c5917080f9c5626c1a56" exitCode=0 Mar 12 15:18:03 crc kubenswrapper[4832]: I0312 15:18:03.342796 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555478-5rzlh" event={"ID":"d8580854-c938-48ab-828c-406427eab926","Type":"ContainerDied","Data":"f3e4beab8740363aa3eb579b9b5db21575d6dc3e5327c5917080f9c5626c1a56"} Mar 12 15:18:04 crc kubenswrapper[4832]: I0312 15:18:04.360025 4832 generic.go:334] "Generic (PLEG): container finished" podID="5de93141-455a-41ad-8137-2f14127035f7" containerID="0abc4baa806f7dcdb742a4fd5e91b1c9608814c00d096ff9de147dae3e441111" exitCode=0 Mar 12 15:18:04 crc kubenswrapper[4832]: I0312 15:18:04.360148 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp" 
event={"ID":"5de93141-455a-41ad-8137-2f14127035f7","Type":"ContainerDied","Data":"0abc4baa806f7dcdb742a4fd5e91b1c9608814c00d096ff9de147dae3e441111"} Mar 12 15:18:04 crc kubenswrapper[4832]: I0312 15:18:04.737301 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555478-5rzlh" Mar 12 15:18:04 crc kubenswrapper[4832]: I0312 15:18:04.814459 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvdfj\" (UniqueName: \"kubernetes.io/projected/d8580854-c938-48ab-828c-406427eab926-kube-api-access-zvdfj\") pod \"d8580854-c938-48ab-828c-406427eab926\" (UID: \"d8580854-c938-48ab-828c-406427eab926\") " Mar 12 15:18:04 crc kubenswrapper[4832]: I0312 15:18:04.820470 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8580854-c938-48ab-828c-406427eab926-kube-api-access-zvdfj" (OuterVolumeSpecName: "kube-api-access-zvdfj") pod "d8580854-c938-48ab-828c-406427eab926" (UID: "d8580854-c938-48ab-828c-406427eab926"). InnerVolumeSpecName "kube-api-access-zvdfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:18:04 crc kubenswrapper[4832]: I0312 15:18:04.918331 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvdfj\" (UniqueName: \"kubernetes.io/projected/d8580854-c938-48ab-828c-406427eab926-kube-api-access-zvdfj\") on node \"crc\" DevicePath \"\"" Mar 12 15:18:05 crc kubenswrapper[4832]: I0312 15:18:05.375998 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555478-5rzlh" event={"ID":"d8580854-c938-48ab-828c-406427eab926","Type":"ContainerDied","Data":"6fc40240ed0e4f460e388ad545ca8e1903357d9375cbbc299519268795128274"} Mar 12 15:18:05 crc kubenswrapper[4832]: I0312 15:18:05.376036 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555478-5rzlh" Mar 12 15:18:05 crc kubenswrapper[4832]: I0312 15:18:05.376056 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fc40240ed0e4f460e388ad545ca8e1903357d9375cbbc299519268795128274" Mar 12 15:18:05 crc kubenswrapper[4832]: I0312 15:18:05.440451 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555472-hl7l5"] Mar 12 15:18:05 crc kubenswrapper[4832]: I0312 15:18:05.451266 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555472-hl7l5"] Mar 12 15:18:05 crc kubenswrapper[4832]: I0312 15:18:05.810835 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp" Mar 12 15:18:05 crc kubenswrapper[4832]: I0312 15:18:05.972649 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5de93141-455a-41ad-8137-2f14127035f7-inventory\") pod \"5de93141-455a-41ad-8137-2f14127035f7\" (UID: \"5de93141-455a-41ad-8137-2f14127035f7\") " Mar 12 15:18:05 crc kubenswrapper[4832]: I0312 15:18:05.972771 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5de93141-455a-41ad-8137-2f14127035f7-ssh-key-openstack-edpm-ipam\") pod \"5de93141-455a-41ad-8137-2f14127035f7\" (UID: \"5de93141-455a-41ad-8137-2f14127035f7\") " Mar 12 15:18:05 crc kubenswrapper[4832]: I0312 15:18:05.972880 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8g6z\" (UniqueName: \"kubernetes.io/projected/5de93141-455a-41ad-8137-2f14127035f7-kube-api-access-h8g6z\") pod \"5de93141-455a-41ad-8137-2f14127035f7\" (UID: \"5de93141-455a-41ad-8137-2f14127035f7\") " Mar 12 15:18:05 crc kubenswrapper[4832]: I0312 
15:18:05.993240 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5de93141-455a-41ad-8137-2f14127035f7-kube-api-access-h8g6z" (OuterVolumeSpecName: "kube-api-access-h8g6z") pod "5de93141-455a-41ad-8137-2f14127035f7" (UID: "5de93141-455a-41ad-8137-2f14127035f7"). InnerVolumeSpecName "kube-api-access-h8g6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:18:05 crc kubenswrapper[4832]: I0312 15:18:05.999051 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5de93141-455a-41ad-8137-2f14127035f7-inventory" (OuterVolumeSpecName: "inventory") pod "5de93141-455a-41ad-8137-2f14127035f7" (UID: "5de93141-455a-41ad-8137-2f14127035f7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:18:05 crc kubenswrapper[4832]: I0312 15:18:05.999523 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5de93141-455a-41ad-8137-2f14127035f7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5de93141-455a-41ad-8137-2f14127035f7" (UID: "5de93141-455a-41ad-8137-2f14127035f7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.074724 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5de93141-455a-41ad-8137-2f14127035f7-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.074754 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5de93141-455a-41ad-8137-2f14127035f7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.074768 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8g6z\" (UniqueName: \"kubernetes.io/projected/5de93141-455a-41ad-8137-2f14127035f7-kube-api-access-h8g6z\") on node \"crc\" DevicePath \"\"" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.391021 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp" event={"ID":"5de93141-455a-41ad-8137-2f14127035f7","Type":"ContainerDied","Data":"af42848e0e1f13b7dad230b818a3ddf0c886313aea28363f11490e67ed29f2d9"} Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.391082 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af42848e0e1f13b7dad230b818a3ddf0c886313aea28363f11490e67ed29f2d9" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.391129 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.562927 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h"] Mar 12 15:18:06 crc kubenswrapper[4832]: E0312 15:18:06.563379 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de93141-455a-41ad-8137-2f14127035f7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.563394 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de93141-455a-41ad-8137-2f14127035f7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 12 15:18:06 crc kubenswrapper[4832]: E0312 15:18:06.563424 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8580854-c938-48ab-828c-406427eab926" containerName="oc" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.563430 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8580854-c938-48ab-828c-406427eab926" containerName="oc" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.563666 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8580854-c938-48ab-828c-406427eab926" containerName="oc" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.563685 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5de93141-455a-41ad-8137-2f14127035f7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.564427 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.567354 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.567851 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.568746 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.569266 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6npm" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.575648 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h"] Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.634158 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e" path="/var/lib/kubelet/pods/ee13ce0c-3ffb-44a8-8b1a-ea0ef7d5944e/volumes" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.691080 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5bc18e0a-c12a-49bc-bcb2-335ac6922bc9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ppl8h\" (UID: \"5bc18e0a-c12a-49bc-bcb2-335ac6922bc9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.691136 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5bc18e0a-c12a-49bc-bcb2-335ac6922bc9-ssh-key-openstack-edpm-ipam\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-ppl8h\" (UID: \"5bc18e0a-c12a-49bc-bcb2-335ac6922bc9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.691211 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mczwm\" (UniqueName: \"kubernetes.io/projected/5bc18e0a-c12a-49bc-bcb2-335ac6922bc9-kube-api-access-mczwm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ppl8h\" (UID: \"5bc18e0a-c12a-49bc-bcb2-335ac6922bc9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.792429 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5bc18e0a-c12a-49bc-bcb2-335ac6922bc9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ppl8h\" (UID: \"5bc18e0a-c12a-49bc-bcb2-335ac6922bc9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.792490 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5bc18e0a-c12a-49bc-bcb2-335ac6922bc9-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ppl8h\" (UID: \"5bc18e0a-c12a-49bc-bcb2-335ac6922bc9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.792593 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mczwm\" (UniqueName: \"kubernetes.io/projected/5bc18e0a-c12a-49bc-bcb2-335ac6922bc9-kube-api-access-mczwm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ppl8h\" (UID: \"5bc18e0a-c12a-49bc-bcb2-335ac6922bc9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h" Mar 12 15:18:06 
crc kubenswrapper[4832]: I0312 15:18:06.797179 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5bc18e0a-c12a-49bc-bcb2-335ac6922bc9-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ppl8h\" (UID: \"5bc18e0a-c12a-49bc-bcb2-335ac6922bc9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.803487 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5bc18e0a-c12a-49bc-bcb2-335ac6922bc9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ppl8h\" (UID: \"5bc18e0a-c12a-49bc-bcb2-335ac6922bc9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.819626 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mczwm\" (UniqueName: \"kubernetes.io/projected/5bc18e0a-c12a-49bc-bcb2-335ac6922bc9-kube-api-access-mczwm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ppl8h\" (UID: \"5bc18e0a-c12a-49bc-bcb2-335ac6922bc9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h" Mar 12 15:18:06 crc kubenswrapper[4832]: I0312 15:18:06.898477 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h" Mar 12 15:18:07 crc kubenswrapper[4832]: I0312 15:18:07.459998 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h"] Mar 12 15:18:08 crc kubenswrapper[4832]: I0312 15:18:08.420247 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h" event={"ID":"5bc18e0a-c12a-49bc-bcb2-335ac6922bc9","Type":"ContainerStarted","Data":"d1ad0973fee7976e23a7521d671cd7a9ced24dd703708b7cb2db3d52f70abbd4"} Mar 12 15:18:08 crc kubenswrapper[4832]: I0312 15:18:08.420654 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h" event={"ID":"5bc18e0a-c12a-49bc-bcb2-335ac6922bc9","Type":"ContainerStarted","Data":"af430d73bba4ea0aa56c21f79b1d1265fa0907699b5d15b0339bed863c909f18"} Mar 12 15:18:08 crc kubenswrapper[4832]: I0312 15:18:08.452317 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h" podStartSLOduration=2.061516266 podStartE2EDuration="2.452289081s" podCreationTimestamp="2026-03-12 15:18:06 +0000 UTC" firstStartedPulling="2026-03-12 15:18:07.457032535 +0000 UTC m=+1846.101046771" lastFinishedPulling="2026-03-12 15:18:07.84780533 +0000 UTC m=+1846.491819586" observedRunningTime="2026-03-12 15:18:08.442214304 +0000 UTC m=+1847.086228560" watchObservedRunningTime="2026-03-12 15:18:08.452289081 +0000 UTC m=+1847.096303337" Mar 12 15:18:11 crc kubenswrapper[4832]: I0312 15:18:11.620728 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:18:11 crc kubenswrapper[4832]: E0312 15:18:11.622000 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:18:23 crc kubenswrapper[4832]: I0312 15:18:23.619359 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:18:23 crc kubenswrapper[4832]: E0312 15:18:23.620744 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:18:28 crc kubenswrapper[4832]: I0312 15:18:28.057436 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-tpjpw"] Mar 12 15:18:28 crc kubenswrapper[4832]: I0312 15:18:28.071321 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-mddvp"] Mar 12 15:18:28 crc kubenswrapper[4832]: I0312 15:18:28.081543 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-hkl8h"] Mar 12 15:18:28 crc kubenswrapper[4832]: I0312 15:18:28.091982 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-10f7-account-create-update-mvmhr"] Mar 12 15:18:28 crc kubenswrapper[4832]: I0312 15:18:28.101813 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-tpjpw"] Mar 12 15:18:28 crc kubenswrapper[4832]: I0312 15:18:28.109791 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-hkl8h"] Mar 12 15:18:28 crc kubenswrapper[4832]: I0312 
15:18:28.117674 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-10f7-account-create-update-mvmhr"] Mar 12 15:18:28 crc kubenswrapper[4832]: I0312 15:18:28.126195 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-mddvp"] Mar 12 15:18:28 crc kubenswrapper[4832]: I0312 15:18:28.631693 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15a6d52a-8c4b-4d33-a56f-3173bf227728" path="/var/lib/kubelet/pods/15a6d52a-8c4b-4d33-a56f-3173bf227728/volumes" Mar 12 15:18:28 crc kubenswrapper[4832]: I0312 15:18:28.632413 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d35b44d-107c-41c3-bbb4-02d9059167e5" path="/var/lib/kubelet/pods/4d35b44d-107c-41c3-bbb4-02d9059167e5/volumes" Mar 12 15:18:28 crc kubenswrapper[4832]: I0312 15:18:28.632983 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea781ce-2058-4f49-b8c5-b0886379887a" path="/var/lib/kubelet/pods/6ea781ce-2058-4f49-b8c5-b0886379887a/volumes" Mar 12 15:18:28 crc kubenswrapper[4832]: I0312 15:18:28.633564 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b1af4f2-c6be-4256-9a21-5df02b6e04c7" path="/var/lib/kubelet/pods/9b1af4f2-c6be-4256-9a21-5df02b6e04c7/volumes" Mar 12 15:18:29 crc kubenswrapper[4832]: I0312 15:18:29.028878 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-13fb-account-create-update-qgfdt"] Mar 12 15:18:29 crc kubenswrapper[4832]: I0312 15:18:29.039278 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-13fb-account-create-update-qgfdt"] Mar 12 15:18:29 crc kubenswrapper[4832]: I0312 15:18:29.048609 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-3211-account-create-update-g55fm"] Mar 12 15:18:29 crc kubenswrapper[4832]: I0312 15:18:29.059202 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-3211-account-create-update-g55fm"] Mar 12 15:18:30 crc kubenswrapper[4832]: I0312 15:18:30.632635 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1922dec6-71fa-4ea1-94da-69bb83431a82" path="/var/lib/kubelet/pods/1922dec6-71fa-4ea1-94da-69bb83431a82/volumes" Mar 12 15:18:30 crc kubenswrapper[4832]: I0312 15:18:30.634353 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb14d699-d295-4ed3-ab30-a16b79ec7d94" path="/var/lib/kubelet/pods/cb14d699-d295-4ed3-ab30-a16b79ec7d94/volumes" Mar 12 15:18:34 crc kubenswrapper[4832]: I0312 15:18:34.619663 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:18:34 crc kubenswrapper[4832]: E0312 15:18:34.620265 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:18:45 crc kubenswrapper[4832]: E0312 15:18:45.165106 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bc18e0a_c12a_49bc_bcb2_335ac6922bc9.slice/crio-d1ad0973fee7976e23a7521d671cd7a9ced24dd703708b7cb2db3d52f70abbd4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bc18e0a_c12a_49bc_bcb2_335ac6922bc9.slice/crio-conmon-d1ad0973fee7976e23a7521d671cd7a9ced24dd703708b7cb2db3d52f70abbd4.scope\": RecentStats: unable to find data in memory cache]" Mar 12 15:18:45 crc kubenswrapper[4832]: I0312 15:18:45.816577 4832 generic.go:334] "Generic (PLEG): 
container finished" podID="5bc18e0a-c12a-49bc-bcb2-335ac6922bc9" containerID="d1ad0973fee7976e23a7521d671cd7a9ced24dd703708b7cb2db3d52f70abbd4" exitCode=0 Mar 12 15:18:45 crc kubenswrapper[4832]: I0312 15:18:45.816640 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h" event={"ID":"5bc18e0a-c12a-49bc-bcb2-335ac6922bc9","Type":"ContainerDied","Data":"d1ad0973fee7976e23a7521d671cd7a9ced24dd703708b7cb2db3d52f70abbd4"} Mar 12 15:18:47 crc kubenswrapper[4832]: I0312 15:18:47.275047 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h" Mar 12 15:18:47 crc kubenswrapper[4832]: I0312 15:18:47.369834 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5bc18e0a-c12a-49bc-bcb2-335ac6922bc9-inventory\") pod \"5bc18e0a-c12a-49bc-bcb2-335ac6922bc9\" (UID: \"5bc18e0a-c12a-49bc-bcb2-335ac6922bc9\") " Mar 12 15:18:47 crc kubenswrapper[4832]: I0312 15:18:47.369929 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mczwm\" (UniqueName: \"kubernetes.io/projected/5bc18e0a-c12a-49bc-bcb2-335ac6922bc9-kube-api-access-mczwm\") pod \"5bc18e0a-c12a-49bc-bcb2-335ac6922bc9\" (UID: \"5bc18e0a-c12a-49bc-bcb2-335ac6922bc9\") " Mar 12 15:18:47 crc kubenswrapper[4832]: I0312 15:18:47.370052 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5bc18e0a-c12a-49bc-bcb2-335ac6922bc9-ssh-key-openstack-edpm-ipam\") pod \"5bc18e0a-c12a-49bc-bcb2-335ac6922bc9\" (UID: \"5bc18e0a-c12a-49bc-bcb2-335ac6922bc9\") " Mar 12 15:18:47 crc kubenswrapper[4832]: I0312 15:18:47.377686 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5bc18e0a-c12a-49bc-bcb2-335ac6922bc9-kube-api-access-mczwm" (OuterVolumeSpecName: "kube-api-access-mczwm") pod "5bc18e0a-c12a-49bc-bcb2-335ac6922bc9" (UID: "5bc18e0a-c12a-49bc-bcb2-335ac6922bc9"). InnerVolumeSpecName "kube-api-access-mczwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:18:47 crc kubenswrapper[4832]: I0312 15:18:47.403970 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bc18e0a-c12a-49bc-bcb2-335ac6922bc9-inventory" (OuterVolumeSpecName: "inventory") pod "5bc18e0a-c12a-49bc-bcb2-335ac6922bc9" (UID: "5bc18e0a-c12a-49bc-bcb2-335ac6922bc9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:18:47 crc kubenswrapper[4832]: I0312 15:18:47.427098 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bc18e0a-c12a-49bc-bcb2-335ac6922bc9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5bc18e0a-c12a-49bc-bcb2-335ac6922bc9" (UID: "5bc18e0a-c12a-49bc-bcb2-335ac6922bc9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:18:47 crc kubenswrapper[4832]: I0312 15:18:47.472077 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mczwm\" (UniqueName: \"kubernetes.io/projected/5bc18e0a-c12a-49bc-bcb2-335ac6922bc9-kube-api-access-mczwm\") on node \"crc\" DevicePath \"\"" Mar 12 15:18:47 crc kubenswrapper[4832]: I0312 15:18:47.472106 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5bc18e0a-c12a-49bc-bcb2-335ac6922bc9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 15:18:47 crc kubenswrapper[4832]: I0312 15:18:47.472117 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5bc18e0a-c12a-49bc-bcb2-335ac6922bc9-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 15:18:47 crc kubenswrapper[4832]: I0312 15:18:47.846350 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h" event={"ID":"5bc18e0a-c12a-49bc-bcb2-335ac6922bc9","Type":"ContainerDied","Data":"af430d73bba4ea0aa56c21f79b1d1265fa0907699b5d15b0339bed863c909f18"} Mar 12 15:18:47 crc kubenswrapper[4832]: I0312 15:18:47.846404 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af430d73bba4ea0aa56c21f79b1d1265fa0907699b5d15b0339bed863c909f18" Mar 12 15:18:47 crc kubenswrapper[4832]: I0312 15:18:47.846876 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ppl8h" Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.003326 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv"] Mar 12 15:18:48 crc kubenswrapper[4832]: E0312 15:18:48.003707 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc18e0a-c12a-49bc-bcb2-335ac6922bc9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.003720 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc18e0a-c12a-49bc-bcb2-335ac6922bc9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.003917 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bc18e0a-c12a-49bc-bcb2-335ac6922bc9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.004522 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv" Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.008408 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.008649 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.008786 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.009370 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6npm" Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.027223 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv"] Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.186537 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27b4a258-8985-4bec-a0a5-d024cd4e9f55-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv\" (UID: \"27b4a258-8985-4bec-a0a5-d024cd4e9f55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv" Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.186661 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27b4a258-8985-4bec-a0a5-d024cd4e9f55-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv\" (UID: \"27b4a258-8985-4bec-a0a5-d024cd4e9f55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv" Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.186865 
4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj96q\" (UniqueName: \"kubernetes.io/projected/27b4a258-8985-4bec-a0a5-d024cd4e9f55-kube-api-access-wj96q\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv\" (UID: \"27b4a258-8985-4bec-a0a5-d024cd4e9f55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv" Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.288495 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27b4a258-8985-4bec-a0a5-d024cd4e9f55-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv\" (UID: \"27b4a258-8985-4bec-a0a5-d024cd4e9f55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv" Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.288839 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27b4a258-8985-4bec-a0a5-d024cd4e9f55-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv\" (UID: \"27b4a258-8985-4bec-a0a5-d024cd4e9f55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv" Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.288899 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj96q\" (UniqueName: \"kubernetes.io/projected/27b4a258-8985-4bec-a0a5-d024cd4e9f55-kube-api-access-wj96q\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv\" (UID: \"27b4a258-8985-4bec-a0a5-d024cd4e9f55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv" Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.300592 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27b4a258-8985-4bec-a0a5-d024cd4e9f55-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv\" (UID: \"27b4a258-8985-4bec-a0a5-d024cd4e9f55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv" Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.300639 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27b4a258-8985-4bec-a0a5-d024cd4e9f55-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv\" (UID: \"27b4a258-8985-4bec-a0a5-d024cd4e9f55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv" Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.311645 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj96q\" (UniqueName: \"kubernetes.io/projected/27b4a258-8985-4bec-a0a5-d024cd4e9f55-kube-api-access-wj96q\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv\" (UID: \"27b4a258-8985-4bec-a0a5-d024cd4e9f55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv" Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.325056 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv" Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.692128 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv"] Mar 12 15:18:48 crc kubenswrapper[4832]: W0312 15:18:48.696206 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27b4a258_8985_4bec_a0a5_d024cd4e9f55.slice/crio-bc358256323ae0dd9a7911d3b8f6845fda0d262bcf2cbc66185278c8ffebd3ea WatchSource:0}: Error finding container bc358256323ae0dd9a7911d3b8f6845fda0d262bcf2cbc66185278c8ffebd3ea: Status 404 returned error can't find the container with id bc358256323ae0dd9a7911d3b8f6845fda0d262bcf2cbc66185278c8ffebd3ea Mar 12 15:18:48 crc kubenswrapper[4832]: I0312 15:18:48.863305 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv" event={"ID":"27b4a258-8985-4bec-a0a5-d024cd4e9f55","Type":"ContainerStarted","Data":"bc358256323ae0dd9a7911d3b8f6845fda0d262bcf2cbc66185278c8ffebd3ea"} Mar 12 15:18:49 crc kubenswrapper[4832]: I0312 15:18:49.620128 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:18:49 crc kubenswrapper[4832]: E0312 15:18:49.620653 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:18:49 crc kubenswrapper[4832]: I0312 15:18:49.878286 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv" event={"ID":"27b4a258-8985-4bec-a0a5-d024cd4e9f55","Type":"ContainerStarted","Data":"b0ea896bf343880cf29d1dbafc74945850b38cd765d1b3dc6729329b74c32a04"} Mar 12 15:18:49 crc kubenswrapper[4832]: I0312 15:18:49.901229 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv" podStartSLOduration=2.437084986 podStartE2EDuration="2.901207027s" podCreationTimestamp="2026-03-12 15:18:47 +0000 UTC" firstStartedPulling="2026-03-12 15:18:48.698103001 +0000 UTC m=+1887.342117227" lastFinishedPulling="2026-03-12 15:18:49.162224992 +0000 UTC m=+1887.806239268" observedRunningTime="2026-03-12 15:18:49.897146661 +0000 UTC m=+1888.541160927" watchObservedRunningTime="2026-03-12 15:18:49.901207027 +0000 UTC m=+1888.545221253" Mar 12 15:18:50 crc kubenswrapper[4832]: I0312 15:18:50.288248 4832 scope.go:117] "RemoveContainer" containerID="0ab8f76bdd7141ee772ca8f1c3b8b3c341cae1b5a641fad21588306f601568e0" Mar 12 15:18:50 crc kubenswrapper[4832]: I0312 15:18:50.319896 4832 scope.go:117] "RemoveContainer" containerID="d55a330d4b6f2b8bfa546ce46cb612294af1d148558f7f10e6b46aeb94c7b4f4" Mar 12 15:18:50 crc kubenswrapper[4832]: I0312 15:18:50.357235 4832 scope.go:117] "RemoveContainer" containerID="c920565b45e7ec88ac6065e59f36fca1b7ae24f8614a5544b0dc53dea1af349a" Mar 12 15:18:50 crc kubenswrapper[4832]: I0312 15:18:50.404177 4832 scope.go:117] "RemoveContainer" containerID="0cc0de4a635656a0ffb465ab66ddc9bd6a66ef1dc6f546e13d578aa8aca59ed8" Mar 12 15:18:50 crc kubenswrapper[4832]: I0312 15:18:50.438955 4832 scope.go:117] "RemoveContainer" containerID="902d3a6df6a2599a51d88a82269d66a67ebc93d0e997290c7c04db15d19e5905" Mar 12 15:18:50 crc kubenswrapper[4832]: I0312 15:18:50.483494 4832 scope.go:117] "RemoveContainer" containerID="34bdd30b51e980a430daa5ba4a185519ba7c893c0d23ad30fddb2d795513f930" Mar 12 15:18:50 crc kubenswrapper[4832]: I0312 
15:18:50.519830 4832 scope.go:117] "RemoveContainer" containerID="b583147b9bfe2a872bc35bec47be57571b5c6846bae1277a496b67de764dd0e8" Mar 12 15:18:54 crc kubenswrapper[4832]: I0312 15:18:54.049160 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rclwl"] Mar 12 15:18:54 crc kubenswrapper[4832]: I0312 15:18:54.062796 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rclwl"] Mar 12 15:18:54 crc kubenswrapper[4832]: I0312 15:18:54.633608 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a680707-4129-474a-8c85-5395a10c821b" path="/var/lib/kubelet/pods/0a680707-4129-474a-8c85-5395a10c821b/volumes" Mar 12 15:19:01 crc kubenswrapper[4832]: I0312 15:19:01.620060 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:19:02 crc kubenswrapper[4832]: I0312 15:19:02.006822 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerStarted","Data":"8770e7b9c1d71d91f69b085e39e97d28156baef5e8fd0f0f7513dd17570fa130"} Mar 12 15:19:16 crc kubenswrapper[4832]: I0312 15:19:16.044247 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-v4wwj"] Mar 12 15:19:16 crc kubenswrapper[4832]: I0312 15:19:16.051590 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-v4wwj"] Mar 12 15:19:16 crc kubenswrapper[4832]: I0312 15:19:16.635723 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b521e10b-cf39-49fe-9078-bdc3e8f87a5d" path="/var/lib/kubelet/pods/b521e10b-cf39-49fe-9078-bdc3e8f87a5d/volumes" Mar 12 15:19:17 crc kubenswrapper[4832]: I0312 15:19:17.036926 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9msl5"] Mar 12 
15:19:17 crc kubenswrapper[4832]: I0312 15:19:17.046351 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9msl5"] Mar 12 15:19:18 crc kubenswrapper[4832]: I0312 15:19:18.635697 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07b08aa8-c1bf-431c-b101-3c8ece4cd2d4" path="/var/lib/kubelet/pods/07b08aa8-c1bf-431c-b101-3c8ece4cd2d4/volumes" Mar 12 15:19:34 crc kubenswrapper[4832]: I0312 15:19:34.368433 4832 generic.go:334] "Generic (PLEG): container finished" podID="27b4a258-8985-4bec-a0a5-d024cd4e9f55" containerID="b0ea896bf343880cf29d1dbafc74945850b38cd765d1b3dc6729329b74c32a04" exitCode=0 Mar 12 15:19:34 crc kubenswrapper[4832]: I0312 15:19:34.368546 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv" event={"ID":"27b4a258-8985-4bec-a0a5-d024cd4e9f55","Type":"ContainerDied","Data":"b0ea896bf343880cf29d1dbafc74945850b38cd765d1b3dc6729329b74c32a04"} Mar 12 15:19:35 crc kubenswrapper[4832]: I0312 15:19:35.781458 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv" Mar 12 15:19:35 crc kubenswrapper[4832]: I0312 15:19:35.889715 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj96q\" (UniqueName: \"kubernetes.io/projected/27b4a258-8985-4bec-a0a5-d024cd4e9f55-kube-api-access-wj96q\") pod \"27b4a258-8985-4bec-a0a5-d024cd4e9f55\" (UID: \"27b4a258-8985-4bec-a0a5-d024cd4e9f55\") " Mar 12 15:19:35 crc kubenswrapper[4832]: I0312 15:19:35.890028 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27b4a258-8985-4bec-a0a5-d024cd4e9f55-inventory\") pod \"27b4a258-8985-4bec-a0a5-d024cd4e9f55\" (UID: \"27b4a258-8985-4bec-a0a5-d024cd4e9f55\") " Mar 12 15:19:35 crc kubenswrapper[4832]: I0312 15:19:35.890085 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27b4a258-8985-4bec-a0a5-d024cd4e9f55-ssh-key-openstack-edpm-ipam\") pod \"27b4a258-8985-4bec-a0a5-d024cd4e9f55\" (UID: \"27b4a258-8985-4bec-a0a5-d024cd4e9f55\") " Mar 12 15:19:35 crc kubenswrapper[4832]: I0312 15:19:35.895039 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27b4a258-8985-4bec-a0a5-d024cd4e9f55-kube-api-access-wj96q" (OuterVolumeSpecName: "kube-api-access-wj96q") pod "27b4a258-8985-4bec-a0a5-d024cd4e9f55" (UID: "27b4a258-8985-4bec-a0a5-d024cd4e9f55"). InnerVolumeSpecName "kube-api-access-wj96q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:19:35 crc kubenswrapper[4832]: I0312 15:19:35.938286 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b4a258-8985-4bec-a0a5-d024cd4e9f55-inventory" (OuterVolumeSpecName: "inventory") pod "27b4a258-8985-4bec-a0a5-d024cd4e9f55" (UID: "27b4a258-8985-4bec-a0a5-d024cd4e9f55"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:19:35 crc kubenswrapper[4832]: I0312 15:19:35.953654 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b4a258-8985-4bec-a0a5-d024cd4e9f55-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "27b4a258-8985-4bec-a0a5-d024cd4e9f55" (UID: "27b4a258-8985-4bec-a0a5-d024cd4e9f55"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:19:35 crc kubenswrapper[4832]: I0312 15:19:35.992473 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj96q\" (UniqueName: \"kubernetes.io/projected/27b4a258-8985-4bec-a0a5-d024cd4e9f55-kube-api-access-wj96q\") on node \"crc\" DevicePath \"\"" Mar 12 15:19:35 crc kubenswrapper[4832]: I0312 15:19:35.992544 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27b4a258-8985-4bec-a0a5-d024cd4e9f55-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 15:19:35 crc kubenswrapper[4832]: I0312 15:19:35.992559 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27b4a258-8985-4bec-a0a5-d024cd4e9f55-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.385142 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv" event={"ID":"27b4a258-8985-4bec-a0a5-d024cd4e9f55","Type":"ContainerDied","Data":"bc358256323ae0dd9a7911d3b8f6845fda0d262bcf2cbc66185278c8ffebd3ea"} Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.385182 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc358256323ae0dd9a7911d3b8f6845fda0d262bcf2cbc66185278c8ffebd3ea" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 
15:19:36.385422 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.477207 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wscmn"] Mar 12 15:19:36 crc kubenswrapper[4832]: E0312 15:19:36.477647 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27b4a258-8985-4bec-a0a5-d024cd4e9f55" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.477665 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="27b4a258-8985-4bec-a0a5-d024cd4e9f55" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.477868 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="27b4a258-8985-4bec-a0a5-d024cd4e9f55" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.478487 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wscmn" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.481257 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.481597 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.481762 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6npm" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.482373 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 15:19:36 crc kubenswrapper[4832]: E0312 15:19:36.490629 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27b4a258_8985_4bec_a0a5_d024cd4e9f55.slice/crio-bc358256323ae0dd9a7911d3b8f6845fda0d262bcf2cbc66185278c8ffebd3ea\": RecentStats: unable to find data in memory cache]" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.497614 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wscmn"] Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.602175 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92375ce9-aeef-48ed-885d-7b648497c2b5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wscmn\" (UID: \"92375ce9-aeef-48ed-885d-7b648497c2b5\") " pod="openstack/ssh-known-hosts-edpm-deployment-wscmn" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.602228 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"inventory-0\" (UniqueName: \"kubernetes.io/secret/92375ce9-aeef-48ed-885d-7b648497c2b5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wscmn\" (UID: \"92375ce9-aeef-48ed-885d-7b648497c2b5\") " pod="openstack/ssh-known-hosts-edpm-deployment-wscmn" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.602342 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfhxb\" (UniqueName: \"kubernetes.io/projected/92375ce9-aeef-48ed-885d-7b648497c2b5-kube-api-access-bfhxb\") pod \"ssh-known-hosts-edpm-deployment-wscmn\" (UID: \"92375ce9-aeef-48ed-885d-7b648497c2b5\") " pod="openstack/ssh-known-hosts-edpm-deployment-wscmn" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.703875 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfhxb\" (UniqueName: \"kubernetes.io/projected/92375ce9-aeef-48ed-885d-7b648497c2b5-kube-api-access-bfhxb\") pod \"ssh-known-hosts-edpm-deployment-wscmn\" (UID: \"92375ce9-aeef-48ed-885d-7b648497c2b5\") " pod="openstack/ssh-known-hosts-edpm-deployment-wscmn" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.704292 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92375ce9-aeef-48ed-885d-7b648497c2b5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wscmn\" (UID: \"92375ce9-aeef-48ed-885d-7b648497c2b5\") " pod="openstack/ssh-known-hosts-edpm-deployment-wscmn" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.704394 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/92375ce9-aeef-48ed-885d-7b648497c2b5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wscmn\" (UID: \"92375ce9-aeef-48ed-885d-7b648497c2b5\") " pod="openstack/ssh-known-hosts-edpm-deployment-wscmn" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 
15:19:36.710775 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92375ce9-aeef-48ed-885d-7b648497c2b5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wscmn\" (UID: \"92375ce9-aeef-48ed-885d-7b648497c2b5\") " pod="openstack/ssh-known-hosts-edpm-deployment-wscmn" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.710813 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/92375ce9-aeef-48ed-885d-7b648497c2b5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wscmn\" (UID: \"92375ce9-aeef-48ed-885d-7b648497c2b5\") " pod="openstack/ssh-known-hosts-edpm-deployment-wscmn" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.723525 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfhxb\" (UniqueName: \"kubernetes.io/projected/92375ce9-aeef-48ed-885d-7b648497c2b5-kube-api-access-bfhxb\") pod \"ssh-known-hosts-edpm-deployment-wscmn\" (UID: \"92375ce9-aeef-48ed-885d-7b648497c2b5\") " pod="openstack/ssh-known-hosts-edpm-deployment-wscmn" Mar 12 15:19:36 crc kubenswrapper[4832]: I0312 15:19:36.799550 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wscmn" Mar 12 15:19:37 crc kubenswrapper[4832]: I0312 15:19:37.349337 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wscmn"] Mar 12 15:19:37 crc kubenswrapper[4832]: I0312 15:19:37.396902 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wscmn" event={"ID":"92375ce9-aeef-48ed-885d-7b648497c2b5","Type":"ContainerStarted","Data":"0bf42e809c500ed6b9788a50b73f7c927c1221f49da4bb09eb9c9f81c51f9a6c"} Mar 12 15:19:38 crc kubenswrapper[4832]: I0312 15:19:38.407408 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wscmn" event={"ID":"92375ce9-aeef-48ed-885d-7b648497c2b5","Type":"ContainerStarted","Data":"524a890a44d5a50db6ee2c8239cdc227691f23d0bc57b23758fde7b173ebd45e"} Mar 12 15:19:38 crc kubenswrapper[4832]: I0312 15:19:38.424563 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-wscmn" podStartSLOduration=1.9421505510000001 podStartE2EDuration="2.424549512s" podCreationTimestamp="2026-03-12 15:19:36 +0000 UTC" firstStartedPulling="2026-03-12 15:19:37.356959215 +0000 UTC m=+1936.000973441" lastFinishedPulling="2026-03-12 15:19:37.839358176 +0000 UTC m=+1936.483372402" observedRunningTime="2026-03-12 15:19:38.424422769 +0000 UTC m=+1937.068436995" watchObservedRunningTime="2026-03-12 15:19:38.424549512 +0000 UTC m=+1937.068563738" Mar 12 15:19:44 crc kubenswrapper[4832]: I0312 15:19:44.463426 4832 generic.go:334] "Generic (PLEG): container finished" podID="92375ce9-aeef-48ed-885d-7b648497c2b5" containerID="524a890a44d5a50db6ee2c8239cdc227691f23d0bc57b23758fde7b173ebd45e" exitCode=0 Mar 12 15:19:44 crc kubenswrapper[4832]: I0312 15:19:44.463520 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wscmn" 
event={"ID":"92375ce9-aeef-48ed-885d-7b648497c2b5","Type":"ContainerDied","Data":"524a890a44d5a50db6ee2c8239cdc227691f23d0bc57b23758fde7b173ebd45e"} Mar 12 15:19:45 crc kubenswrapper[4832]: I0312 15:19:45.984305 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wscmn" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.091541 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfhxb\" (UniqueName: \"kubernetes.io/projected/92375ce9-aeef-48ed-885d-7b648497c2b5-kube-api-access-bfhxb\") pod \"92375ce9-aeef-48ed-885d-7b648497c2b5\" (UID: \"92375ce9-aeef-48ed-885d-7b648497c2b5\") " Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.091644 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/92375ce9-aeef-48ed-885d-7b648497c2b5-inventory-0\") pod \"92375ce9-aeef-48ed-885d-7b648497c2b5\" (UID: \"92375ce9-aeef-48ed-885d-7b648497c2b5\") " Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.091819 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92375ce9-aeef-48ed-885d-7b648497c2b5-ssh-key-openstack-edpm-ipam\") pod \"92375ce9-aeef-48ed-885d-7b648497c2b5\" (UID: \"92375ce9-aeef-48ed-885d-7b648497c2b5\") " Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.103642 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92375ce9-aeef-48ed-885d-7b648497c2b5-kube-api-access-bfhxb" (OuterVolumeSpecName: "kube-api-access-bfhxb") pod "92375ce9-aeef-48ed-885d-7b648497c2b5" (UID: "92375ce9-aeef-48ed-885d-7b648497c2b5"). InnerVolumeSpecName "kube-api-access-bfhxb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.118076 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92375ce9-aeef-48ed-885d-7b648497c2b5-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "92375ce9-aeef-48ed-885d-7b648497c2b5" (UID: "92375ce9-aeef-48ed-885d-7b648497c2b5"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.126315 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92375ce9-aeef-48ed-885d-7b648497c2b5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "92375ce9-aeef-48ed-885d-7b648497c2b5" (UID: "92375ce9-aeef-48ed-885d-7b648497c2b5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.194226 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfhxb\" (UniqueName: \"kubernetes.io/projected/92375ce9-aeef-48ed-885d-7b648497c2b5-kube-api-access-bfhxb\") on node \"crc\" DevicePath \"\"" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.194267 4832 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/92375ce9-aeef-48ed-885d-7b648497c2b5-inventory-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.194280 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92375ce9-aeef-48ed-885d-7b648497c2b5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.482813 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wscmn" 
event={"ID":"92375ce9-aeef-48ed-885d-7b648497c2b5","Type":"ContainerDied","Data":"0bf42e809c500ed6b9788a50b73f7c927c1221f49da4bb09eb9c9f81c51f9a6c"} Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.482848 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bf42e809c500ed6b9788a50b73f7c927c1221f49da4bb09eb9c9f81c51f9a6c" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.482857 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wscmn" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.571875 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd"] Mar 12 15:19:46 crc kubenswrapper[4832]: E0312 15:19:46.572339 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92375ce9-aeef-48ed-885d-7b648497c2b5" containerName="ssh-known-hosts-edpm-deployment" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.572364 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="92375ce9-aeef-48ed-885d-7b648497c2b5" containerName="ssh-known-hosts-edpm-deployment" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.572616 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="92375ce9-aeef-48ed-885d-7b648497c2b5" containerName="ssh-known-hosts-edpm-deployment" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.573325 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.575393 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.575481 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6npm" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.577046 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.577154 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.590383 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd"] Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.703091 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b1bf168-48a0-44f0-a01a-ada8aa0fbb24-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bn6rd\" (UID: \"9b1bf168-48a0-44f0-a01a-ada8aa0fbb24\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.703355 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b1bf168-48a0-44f0-a01a-ada8aa0fbb24-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bn6rd\" (UID: \"9b1bf168-48a0-44f0-a01a-ada8aa0fbb24\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.703562 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2ttc\" (UniqueName: \"kubernetes.io/projected/9b1bf168-48a0-44f0-a01a-ada8aa0fbb24-kube-api-access-s2ttc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bn6rd\" (UID: \"9b1bf168-48a0-44f0-a01a-ada8aa0fbb24\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.805606 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b1bf168-48a0-44f0-a01a-ada8aa0fbb24-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bn6rd\" (UID: \"9b1bf168-48a0-44f0-a01a-ada8aa0fbb24\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.805703 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b1bf168-48a0-44f0-a01a-ada8aa0fbb24-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bn6rd\" (UID: \"9b1bf168-48a0-44f0-a01a-ada8aa0fbb24\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.805795 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2ttc\" (UniqueName: \"kubernetes.io/projected/9b1bf168-48a0-44f0-a01a-ada8aa0fbb24-kube-api-access-s2ttc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bn6rd\" (UID: \"9b1bf168-48a0-44f0-a01a-ada8aa0fbb24\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.810113 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b1bf168-48a0-44f0-a01a-ada8aa0fbb24-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-bn6rd\" (UID: \"9b1bf168-48a0-44f0-a01a-ada8aa0fbb24\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.829490 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b1bf168-48a0-44f0-a01a-ada8aa0fbb24-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bn6rd\" (UID: \"9b1bf168-48a0-44f0-a01a-ada8aa0fbb24\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.830802 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2ttc\" (UniqueName: \"kubernetes.io/projected/9b1bf168-48a0-44f0-a01a-ada8aa0fbb24-kube-api-access-s2ttc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bn6rd\" (UID: \"9b1bf168-48a0-44f0-a01a-ada8aa0fbb24\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd" Mar 12 15:19:46 crc kubenswrapper[4832]: I0312 15:19:46.892235 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd" Mar 12 15:19:47 crc kubenswrapper[4832]: I0312 15:19:47.480260 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd"] Mar 12 15:19:47 crc kubenswrapper[4832]: I0312 15:19:47.493083 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd" event={"ID":"9b1bf168-48a0-44f0-a01a-ada8aa0fbb24","Type":"ContainerStarted","Data":"9d393b795cd31afd4244b8aed6b4f87460d059111ce2c3eaec93a9184c1ae530"} Mar 12 15:19:48 crc kubenswrapper[4832]: I0312 15:19:48.503613 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd" event={"ID":"9b1bf168-48a0-44f0-a01a-ada8aa0fbb24","Type":"ContainerStarted","Data":"724dc08236c2494a027d47af45384a162ce2dac1c04e212df5fd3a95a458202d"} Mar 12 15:19:48 crc kubenswrapper[4832]: I0312 15:19:48.530183 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd" podStartSLOduration=2.073908494 podStartE2EDuration="2.530165381s" podCreationTimestamp="2026-03-12 15:19:46 +0000 UTC" firstStartedPulling="2026-03-12 15:19:47.480016329 +0000 UTC m=+1946.124030565" lastFinishedPulling="2026-03-12 15:19:47.936273226 +0000 UTC m=+1946.580287452" observedRunningTime="2026-03-12 15:19:48.524440258 +0000 UTC m=+1947.168454494" watchObservedRunningTime="2026-03-12 15:19:48.530165381 +0000 UTC m=+1947.174179607" Mar 12 15:19:50 crc kubenswrapper[4832]: I0312 15:19:50.677708 4832 scope.go:117] "RemoveContainer" containerID="169c415db7f0ad30b7fdc288c6dcf7e820d7deb0aee5b2dcfc21821fa0bbc1c4" Mar 12 15:19:50 crc kubenswrapper[4832]: I0312 15:19:50.756063 4832 scope.go:117] "RemoveContainer" containerID="8226049b28f79ac6c9138c7fa3d3159af15506b7d3452f3276893082570b29b5" Mar 12 15:19:50 crc kubenswrapper[4832]: I0312 
15:19:50.824768 4832 scope.go:117] "RemoveContainer" containerID="0237b3eedd9b067551e433623895b3095d12df6e6f9e67aeb892067a6bee98d6" Mar 12 15:19:56 crc kubenswrapper[4832]: I0312 15:19:56.575463 4832 generic.go:334] "Generic (PLEG): container finished" podID="9b1bf168-48a0-44f0-a01a-ada8aa0fbb24" containerID="724dc08236c2494a027d47af45384a162ce2dac1c04e212df5fd3a95a458202d" exitCode=0 Mar 12 15:19:56 crc kubenswrapper[4832]: I0312 15:19:56.575627 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd" event={"ID":"9b1bf168-48a0-44f0-a01a-ada8aa0fbb24","Type":"ContainerDied","Data":"724dc08236c2494a027d47af45384a162ce2dac1c04e212df5fd3a95a458202d"} Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.113903 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.264718 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b1bf168-48a0-44f0-a01a-ada8aa0fbb24-inventory\") pod \"9b1bf168-48a0-44f0-a01a-ada8aa0fbb24\" (UID: \"9b1bf168-48a0-44f0-a01a-ada8aa0fbb24\") " Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.264892 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2ttc\" (UniqueName: \"kubernetes.io/projected/9b1bf168-48a0-44f0-a01a-ada8aa0fbb24-kube-api-access-s2ttc\") pod \"9b1bf168-48a0-44f0-a01a-ada8aa0fbb24\" (UID: \"9b1bf168-48a0-44f0-a01a-ada8aa0fbb24\") " Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.264942 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b1bf168-48a0-44f0-a01a-ada8aa0fbb24-ssh-key-openstack-edpm-ipam\") pod \"9b1bf168-48a0-44f0-a01a-ada8aa0fbb24\" (UID: 
\"9b1bf168-48a0-44f0-a01a-ada8aa0fbb24\") " Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.270286 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b1bf168-48a0-44f0-a01a-ada8aa0fbb24-kube-api-access-s2ttc" (OuterVolumeSpecName: "kube-api-access-s2ttc") pod "9b1bf168-48a0-44f0-a01a-ada8aa0fbb24" (UID: "9b1bf168-48a0-44f0-a01a-ada8aa0fbb24"). InnerVolumeSpecName "kube-api-access-s2ttc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.296084 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b1bf168-48a0-44f0-a01a-ada8aa0fbb24-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9b1bf168-48a0-44f0-a01a-ada8aa0fbb24" (UID: "9b1bf168-48a0-44f0-a01a-ada8aa0fbb24"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.304323 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b1bf168-48a0-44f0-a01a-ada8aa0fbb24-inventory" (OuterVolumeSpecName: "inventory") pod "9b1bf168-48a0-44f0-a01a-ada8aa0fbb24" (UID: "9b1bf168-48a0-44f0-a01a-ada8aa0fbb24"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.366930 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2ttc\" (UniqueName: \"kubernetes.io/projected/9b1bf168-48a0-44f0-a01a-ada8aa0fbb24-kube-api-access-s2ttc\") on node \"crc\" DevicePath \"\"" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.366973 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b1bf168-48a0-44f0-a01a-ada8aa0fbb24-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.366989 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b1bf168-48a0-44f0-a01a-ada8aa0fbb24-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.594173 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd" event={"ID":"9b1bf168-48a0-44f0-a01a-ada8aa0fbb24","Type":"ContainerDied","Data":"9d393b795cd31afd4244b8aed6b4f87460d059111ce2c3eaec93a9184c1ae530"} Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.594206 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d393b795cd31afd4244b8aed6b4f87460d059111ce2c3eaec93a9184c1ae530" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.594296 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bn6rd" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.667966 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82"] Mar 12 15:19:58 crc kubenswrapper[4832]: E0312 15:19:58.668316 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b1bf168-48a0-44f0-a01a-ada8aa0fbb24" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.668333 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b1bf168-48a0-44f0-a01a-ada8aa0fbb24" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.668526 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b1bf168-48a0-44f0-a01a-ada8aa0fbb24" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.670643 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.675674 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.675684 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.675800 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.675917 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6npm" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.682680 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82"] Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.773333 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a5acff7-1fff-414e-9ad1-b4b8116f73d4-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82\" (UID: \"1a5acff7-1fff-414e-9ad1-b4b8116f73d4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.773449 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a5acff7-1fff-414e-9ad1-b4b8116f73d4-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82\" (UID: \"1a5acff7-1fff-414e-9ad1-b4b8116f73d4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.773618 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crk25\" (UniqueName: \"kubernetes.io/projected/1a5acff7-1fff-414e-9ad1-b4b8116f73d4-kube-api-access-crk25\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82\" (UID: \"1a5acff7-1fff-414e-9ad1-b4b8116f73d4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.875333 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crk25\" (UniqueName: \"kubernetes.io/projected/1a5acff7-1fff-414e-9ad1-b4b8116f73d4-kube-api-access-crk25\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82\" (UID: \"1a5acff7-1fff-414e-9ad1-b4b8116f73d4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.875488 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a5acff7-1fff-414e-9ad1-b4b8116f73d4-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82\" (UID: \"1a5acff7-1fff-414e-9ad1-b4b8116f73d4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.875642 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a5acff7-1fff-414e-9ad1-b4b8116f73d4-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82\" (UID: \"1a5acff7-1fff-414e-9ad1-b4b8116f73d4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.881924 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a5acff7-1fff-414e-9ad1-b4b8116f73d4-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82\" (UID: \"1a5acff7-1fff-414e-9ad1-b4b8116f73d4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.882142 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a5acff7-1fff-414e-9ad1-b4b8116f73d4-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82\" (UID: \"1a5acff7-1fff-414e-9ad1-b4b8116f73d4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.903205 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crk25\" (UniqueName: \"kubernetes.io/projected/1a5acff7-1fff-414e-9ad1-b4b8116f73d4-kube-api-access-crk25\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82\" (UID: \"1a5acff7-1fff-414e-9ad1-b4b8116f73d4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82" Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.967205 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5ptct"] Mar 12 15:19:58 crc kubenswrapper[4832]: I0312 15:19:58.969407 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5ptct" Mar 12 15:19:59 crc kubenswrapper[4832]: I0312 15:19:58.978357 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5ptct"] Mar 12 15:19:59 crc kubenswrapper[4832]: I0312 15:19:58.988454 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82" Mar 12 15:19:59 crc kubenswrapper[4832]: I0312 15:19:59.078767 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v42w8\" (UniqueName: \"kubernetes.io/projected/ce400e6f-fdd2-48e7-98bf-bc68d986e829-kube-api-access-v42w8\") pod \"community-operators-5ptct\" (UID: \"ce400e6f-fdd2-48e7-98bf-bc68d986e829\") " pod="openshift-marketplace/community-operators-5ptct" Mar 12 15:19:59 crc kubenswrapper[4832]: I0312 15:19:59.078890 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce400e6f-fdd2-48e7-98bf-bc68d986e829-utilities\") pod \"community-operators-5ptct\" (UID: \"ce400e6f-fdd2-48e7-98bf-bc68d986e829\") " pod="openshift-marketplace/community-operators-5ptct" Mar 12 15:19:59 crc kubenswrapper[4832]: I0312 15:19:59.078944 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce400e6f-fdd2-48e7-98bf-bc68d986e829-catalog-content\") pod \"community-operators-5ptct\" (UID: \"ce400e6f-fdd2-48e7-98bf-bc68d986e829\") " pod="openshift-marketplace/community-operators-5ptct" Mar 12 15:19:59 crc kubenswrapper[4832]: I0312 15:19:59.180796 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce400e6f-fdd2-48e7-98bf-bc68d986e829-utilities\") pod \"community-operators-5ptct\" (UID: \"ce400e6f-fdd2-48e7-98bf-bc68d986e829\") " pod="openshift-marketplace/community-operators-5ptct" Mar 12 15:19:59 crc kubenswrapper[4832]: I0312 15:19:59.181153 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce400e6f-fdd2-48e7-98bf-bc68d986e829-catalog-content\") pod 
\"community-operators-5ptct\" (UID: \"ce400e6f-fdd2-48e7-98bf-bc68d986e829\") " pod="openshift-marketplace/community-operators-5ptct" Mar 12 15:19:59 crc kubenswrapper[4832]: I0312 15:19:59.181286 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v42w8\" (UniqueName: \"kubernetes.io/projected/ce400e6f-fdd2-48e7-98bf-bc68d986e829-kube-api-access-v42w8\") pod \"community-operators-5ptct\" (UID: \"ce400e6f-fdd2-48e7-98bf-bc68d986e829\") " pod="openshift-marketplace/community-operators-5ptct" Mar 12 15:19:59 crc kubenswrapper[4832]: I0312 15:19:59.181310 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce400e6f-fdd2-48e7-98bf-bc68d986e829-utilities\") pod \"community-operators-5ptct\" (UID: \"ce400e6f-fdd2-48e7-98bf-bc68d986e829\") " pod="openshift-marketplace/community-operators-5ptct" Mar 12 15:19:59 crc kubenswrapper[4832]: I0312 15:19:59.181986 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce400e6f-fdd2-48e7-98bf-bc68d986e829-catalog-content\") pod \"community-operators-5ptct\" (UID: \"ce400e6f-fdd2-48e7-98bf-bc68d986e829\") " pod="openshift-marketplace/community-operators-5ptct" Mar 12 15:19:59 crc kubenswrapper[4832]: I0312 15:19:59.210550 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v42w8\" (UniqueName: \"kubernetes.io/projected/ce400e6f-fdd2-48e7-98bf-bc68d986e829-kube-api-access-v42w8\") pod \"community-operators-5ptct\" (UID: \"ce400e6f-fdd2-48e7-98bf-bc68d986e829\") " pod="openshift-marketplace/community-operators-5ptct" Mar 12 15:19:59 crc kubenswrapper[4832]: I0312 15:19:59.417364 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5ptct" Mar 12 15:19:59 crc kubenswrapper[4832]: I0312 15:19:59.605316 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82"] Mar 12 15:19:59 crc kubenswrapper[4832]: W0312 15:19:59.892137 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce400e6f_fdd2_48e7_98bf_bc68d986e829.slice/crio-47ecf13eba728c2255346df72028f63588d8c653105c93363ee373194219f4f2 WatchSource:0}: Error finding container 47ecf13eba728c2255346df72028f63588d8c653105c93363ee373194219f4f2: Status 404 returned error can't find the container with id 47ecf13eba728c2255346df72028f63588d8c653105c93363ee373194219f4f2 Mar 12 15:19:59 crc kubenswrapper[4832]: I0312 15:19:59.892249 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5ptct"] Mar 12 15:20:00 crc kubenswrapper[4832]: I0312 15:20:00.129779 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555480-9j2vd"] Mar 12 15:20:00 crc kubenswrapper[4832]: I0312 15:20:00.131416 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555480-9j2vd" Mar 12 15:20:00 crc kubenswrapper[4832]: I0312 15:20:00.134262 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:20:00 crc kubenswrapper[4832]: I0312 15:20:00.134284 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:20:00 crc kubenswrapper[4832]: I0312 15:20:00.134626 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:20:00 crc kubenswrapper[4832]: I0312 15:20:00.141769 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555480-9j2vd"] Mar 12 15:20:00 crc kubenswrapper[4832]: I0312 15:20:00.303753 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tpdz\" (UniqueName: \"kubernetes.io/projected/92cd9e7b-9280-491d-93d3-1bfa4990dd36-kube-api-access-2tpdz\") pod \"auto-csr-approver-29555480-9j2vd\" (UID: \"92cd9e7b-9280-491d-93d3-1bfa4990dd36\") " pod="openshift-infra/auto-csr-approver-29555480-9j2vd" Mar 12 15:20:00 crc kubenswrapper[4832]: I0312 15:20:00.405215 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tpdz\" (UniqueName: \"kubernetes.io/projected/92cd9e7b-9280-491d-93d3-1bfa4990dd36-kube-api-access-2tpdz\") pod \"auto-csr-approver-29555480-9j2vd\" (UID: \"92cd9e7b-9280-491d-93d3-1bfa4990dd36\") " pod="openshift-infra/auto-csr-approver-29555480-9j2vd" Mar 12 15:20:00 crc kubenswrapper[4832]: I0312 15:20:00.428105 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tpdz\" (UniqueName: \"kubernetes.io/projected/92cd9e7b-9280-491d-93d3-1bfa4990dd36-kube-api-access-2tpdz\") pod \"auto-csr-approver-29555480-9j2vd\" (UID: \"92cd9e7b-9280-491d-93d3-1bfa4990dd36\") " 
pod="openshift-infra/auto-csr-approver-29555480-9j2vd" Mar 12 15:20:00 crc kubenswrapper[4832]: I0312 15:20:00.617430 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555480-9j2vd" Mar 12 15:20:00 crc kubenswrapper[4832]: I0312 15:20:00.618028 4832 generic.go:334] "Generic (PLEG): container finished" podID="ce400e6f-fdd2-48e7-98bf-bc68d986e829" containerID="52f3990e221a5b1e067110ffb044be3b685e0bc524be569b812b398b93e32300" exitCode=0 Mar 12 15:20:00 crc kubenswrapper[4832]: I0312 15:20:00.618109 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ptct" event={"ID":"ce400e6f-fdd2-48e7-98bf-bc68d986e829","Type":"ContainerDied","Data":"52f3990e221a5b1e067110ffb044be3b685e0bc524be569b812b398b93e32300"} Mar 12 15:20:00 crc kubenswrapper[4832]: I0312 15:20:00.618137 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ptct" event={"ID":"ce400e6f-fdd2-48e7-98bf-bc68d986e829","Type":"ContainerStarted","Data":"47ecf13eba728c2255346df72028f63588d8c653105c93363ee373194219f4f2"} Mar 12 15:20:00 crc kubenswrapper[4832]: I0312 15:20:00.640925 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82" event={"ID":"1a5acff7-1fff-414e-9ad1-b4b8116f73d4","Type":"ContainerStarted","Data":"480b8866d92b939260f08721258ecf2bcc530e032e0f96a4df64324adb0f3544"} Mar 12 15:20:00 crc kubenswrapper[4832]: I0312 15:20:00.640989 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82" event={"ID":"1a5acff7-1fff-414e-9ad1-b4b8116f73d4","Type":"ContainerStarted","Data":"cb223b215a6f95632ca7d905a48a15b516fb006f888e877d2ea91b4b3984da71"} Mar 12 15:20:00 crc kubenswrapper[4832]: I0312 15:20:00.670125 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82" podStartSLOduration=2.195128515 podStartE2EDuration="2.670101394s" podCreationTimestamp="2026-03-12 15:19:58 +0000 UTC" firstStartedPulling="2026-03-12 15:19:59.610117333 +0000 UTC m=+1958.254131549" lastFinishedPulling="2026-03-12 15:20:00.085090202 +0000 UTC m=+1958.729104428" observedRunningTime="2026-03-12 15:20:00.667167891 +0000 UTC m=+1959.311182127" watchObservedRunningTime="2026-03-12 15:20:00.670101394 +0000 UTC m=+1959.314115640" Mar 12 15:20:01 crc kubenswrapper[4832]: I0312 15:20:01.072420 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555480-9j2vd"] Mar 12 15:20:01 crc kubenswrapper[4832]: W0312 15:20:01.075649 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92cd9e7b_9280_491d_93d3_1bfa4990dd36.slice/crio-a1f7641d8edeeef9f3dcbcb1d3bffd99cbb76d21f0bc2da13b39cfe68bd43f9b WatchSource:0}: Error finding container a1f7641d8edeeef9f3dcbcb1d3bffd99cbb76d21f0bc2da13b39cfe68bd43f9b: Status 404 returned error can't find the container with id a1f7641d8edeeef9f3dcbcb1d3bffd99cbb76d21f0bc2da13b39cfe68bd43f9b Mar 12 15:20:01 crc kubenswrapper[4832]: I0312 15:20:01.634575 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555480-9j2vd" event={"ID":"92cd9e7b-9280-491d-93d3-1bfa4990dd36","Type":"ContainerStarted","Data":"a1f7641d8edeeef9f3dcbcb1d3bffd99cbb76d21f0bc2da13b39cfe68bd43f9b"} Mar 12 15:20:02 crc kubenswrapper[4832]: I0312 15:20:02.051175 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-2qlz2"] Mar 12 15:20:02 crc kubenswrapper[4832]: I0312 15:20:02.062671 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-2qlz2"] Mar 12 15:20:02 crc kubenswrapper[4832]: I0312 15:20:02.631733 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="79232ba7-6284-417a-9de7-8ba849bbeb7d" path="/var/lib/kubelet/pods/79232ba7-6284-417a-9de7-8ba849bbeb7d/volumes" Mar 12 15:20:03 crc kubenswrapper[4832]: I0312 15:20:03.655836 4832 generic.go:334] "Generic (PLEG): container finished" podID="92cd9e7b-9280-491d-93d3-1bfa4990dd36" containerID="c8d40d98c0122a764d4c20c98f3a923d611ba9d7ef2bd04c3f35bd991f930cb0" exitCode=0 Mar 12 15:20:03 crc kubenswrapper[4832]: I0312 15:20:03.655880 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555480-9j2vd" event={"ID":"92cd9e7b-9280-491d-93d3-1bfa4990dd36","Type":"ContainerDied","Data":"c8d40d98c0122a764d4c20c98f3a923d611ba9d7ef2bd04c3f35bd991f930cb0"} Mar 12 15:20:05 crc kubenswrapper[4832]: I0312 15:20:05.038706 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555480-9j2vd" Mar 12 15:20:05 crc kubenswrapper[4832]: I0312 15:20:05.127374 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tpdz\" (UniqueName: \"kubernetes.io/projected/92cd9e7b-9280-491d-93d3-1bfa4990dd36-kube-api-access-2tpdz\") pod \"92cd9e7b-9280-491d-93d3-1bfa4990dd36\" (UID: \"92cd9e7b-9280-491d-93d3-1bfa4990dd36\") " Mar 12 15:20:05 crc kubenswrapper[4832]: I0312 15:20:05.136734 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92cd9e7b-9280-491d-93d3-1bfa4990dd36-kube-api-access-2tpdz" (OuterVolumeSpecName: "kube-api-access-2tpdz") pod "92cd9e7b-9280-491d-93d3-1bfa4990dd36" (UID: "92cd9e7b-9280-491d-93d3-1bfa4990dd36"). InnerVolumeSpecName "kube-api-access-2tpdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:20:05 crc kubenswrapper[4832]: I0312 15:20:05.230451 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tpdz\" (UniqueName: \"kubernetes.io/projected/92cd9e7b-9280-491d-93d3-1bfa4990dd36-kube-api-access-2tpdz\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:05 crc kubenswrapper[4832]: I0312 15:20:05.675334 4832 generic.go:334] "Generic (PLEG): container finished" podID="ce400e6f-fdd2-48e7-98bf-bc68d986e829" containerID="e6ca519465256e7856eda68dc72b1c9b13d4b94ea1a1b1d63f16bb3babafcdf3" exitCode=0 Mar 12 15:20:05 crc kubenswrapper[4832]: I0312 15:20:05.675426 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ptct" event={"ID":"ce400e6f-fdd2-48e7-98bf-bc68d986e829","Type":"ContainerDied","Data":"e6ca519465256e7856eda68dc72b1c9b13d4b94ea1a1b1d63f16bb3babafcdf3"} Mar 12 15:20:05 crc kubenswrapper[4832]: I0312 15:20:05.677483 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555480-9j2vd" event={"ID":"92cd9e7b-9280-491d-93d3-1bfa4990dd36","Type":"ContainerDied","Data":"a1f7641d8edeeef9f3dcbcb1d3bffd99cbb76d21f0bc2da13b39cfe68bd43f9b"} Mar 12 15:20:05 crc kubenswrapper[4832]: I0312 15:20:05.677544 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1f7641d8edeeef9f3dcbcb1d3bffd99cbb76d21f0bc2da13b39cfe68bd43f9b" Mar 12 15:20:05 crc kubenswrapper[4832]: I0312 15:20:05.677579 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555480-9j2vd" Mar 12 15:20:06 crc kubenswrapper[4832]: I0312 15:20:06.097448 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555474-q5txp"] Mar 12 15:20:06 crc kubenswrapper[4832]: I0312 15:20:06.106476 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555474-q5txp"] Mar 12 15:20:06 crc kubenswrapper[4832]: I0312 15:20:06.644603 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d9129c0-c103-4451-aa21-684915e37eeb" path="/var/lib/kubelet/pods/1d9129c0-c103-4451-aa21-684915e37eeb/volumes" Mar 12 15:20:06 crc kubenswrapper[4832]: I0312 15:20:06.688185 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ptct" event={"ID":"ce400e6f-fdd2-48e7-98bf-bc68d986e829","Type":"ContainerStarted","Data":"f249ae1f2fefcbbbe7a083bbf30d6fd6d40e246ec456b258eef815e2b6ef4db3"} Mar 12 15:20:06 crc kubenswrapper[4832]: I0312 15:20:06.713683 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5ptct" podStartSLOduration=3.258869013 podStartE2EDuration="8.713657509s" podCreationTimestamp="2026-03-12 15:19:58 +0000 UTC" firstStartedPulling="2026-03-12 15:20:00.620937595 +0000 UTC m=+1959.264951831" lastFinishedPulling="2026-03-12 15:20:06.075726091 +0000 UTC m=+1964.719740327" observedRunningTime="2026-03-12 15:20:06.705887008 +0000 UTC m=+1965.349901264" watchObservedRunningTime="2026-03-12 15:20:06.713657509 +0000 UTC m=+1965.357671755" Mar 12 15:20:09 crc kubenswrapper[4832]: I0312 15:20:09.418265 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5ptct" Mar 12 15:20:09 crc kubenswrapper[4832]: I0312 15:20:09.418905 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-5ptct" Mar 12 15:20:09 crc kubenswrapper[4832]: I0312 15:20:09.490047 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5ptct" Mar 12 15:20:09 crc kubenswrapper[4832]: I0312 15:20:09.724193 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82" event={"ID":"1a5acff7-1fff-414e-9ad1-b4b8116f73d4","Type":"ContainerDied","Data":"480b8866d92b939260f08721258ecf2bcc530e032e0f96a4df64324adb0f3544"} Mar 12 15:20:09 crc kubenswrapper[4832]: I0312 15:20:09.724144 4832 generic.go:334] "Generic (PLEG): container finished" podID="1a5acff7-1fff-414e-9ad1-b4b8116f73d4" containerID="480b8866d92b939260f08721258ecf2bcc530e032e0f96a4df64324adb0f3544" exitCode=0 Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.216157 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.254829 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a5acff7-1fff-414e-9ad1-b4b8116f73d4-ssh-key-openstack-edpm-ipam\") pod \"1a5acff7-1fff-414e-9ad1-b4b8116f73d4\" (UID: \"1a5acff7-1fff-414e-9ad1-b4b8116f73d4\") " Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.255092 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a5acff7-1fff-414e-9ad1-b4b8116f73d4-inventory\") pod \"1a5acff7-1fff-414e-9ad1-b4b8116f73d4\" (UID: \"1a5acff7-1fff-414e-9ad1-b4b8116f73d4\") " Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.255169 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crk25\" (UniqueName: 
\"kubernetes.io/projected/1a5acff7-1fff-414e-9ad1-b4b8116f73d4-kube-api-access-crk25\") pod \"1a5acff7-1fff-414e-9ad1-b4b8116f73d4\" (UID: \"1a5acff7-1fff-414e-9ad1-b4b8116f73d4\") " Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.261223 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a5acff7-1fff-414e-9ad1-b4b8116f73d4-kube-api-access-crk25" (OuterVolumeSpecName: "kube-api-access-crk25") pod "1a5acff7-1fff-414e-9ad1-b4b8116f73d4" (UID: "1a5acff7-1fff-414e-9ad1-b4b8116f73d4"). InnerVolumeSpecName "kube-api-access-crk25". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.285874 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a5acff7-1fff-414e-9ad1-b4b8116f73d4-inventory" (OuterVolumeSpecName: "inventory") pod "1a5acff7-1fff-414e-9ad1-b4b8116f73d4" (UID: "1a5acff7-1fff-414e-9ad1-b4b8116f73d4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.294141 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a5acff7-1fff-414e-9ad1-b4b8116f73d4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1a5acff7-1fff-414e-9ad1-b4b8116f73d4" (UID: "1a5acff7-1fff-414e-9ad1-b4b8116f73d4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.357295 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a5acff7-1fff-414e-9ad1-b4b8116f73d4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.357351 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a5acff7-1fff-414e-9ad1-b4b8116f73d4-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.357361 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crk25\" (UniqueName: \"kubernetes.io/projected/1a5acff7-1fff-414e-9ad1-b4b8116f73d4-kube-api-access-crk25\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.749168 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82" event={"ID":"1a5acff7-1fff-414e-9ad1-b4b8116f73d4","Type":"ContainerDied","Data":"cb223b215a6f95632ca7d905a48a15b516fb006f888e877d2ea91b4b3984da71"} Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.749231 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb223b215a6f95632ca7d905a48a15b516fb006f888e877d2ea91b4b3984da71" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.749320 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.833267 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl"] Mar 12 15:20:11 crc kubenswrapper[4832]: E0312 15:20:11.833797 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92cd9e7b-9280-491d-93d3-1bfa4990dd36" containerName="oc" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.833837 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="92cd9e7b-9280-491d-93d3-1bfa4990dd36" containerName="oc" Mar 12 15:20:11 crc kubenswrapper[4832]: E0312 15:20:11.833879 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5acff7-1fff-414e-9ad1-b4b8116f73d4" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.833893 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5acff7-1fff-414e-9ad1-b4b8116f73d4" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.834213 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="92cd9e7b-9280-491d-93d3-1bfa4990dd36" containerName="oc" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.834265 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a5acff7-1fff-414e-9ad1-b4b8116f73d4" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.835189 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.842393 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.842704 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.842815 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6npm" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.842915 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.844699 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.845819 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.846018 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.846141 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.846731 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl"] Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.969811 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.969878 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.969905 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.969929 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.969945 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.969972 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.970004 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.970025 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.970045 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.970090 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.970113 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.970126 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.970141 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:11 crc kubenswrapper[4832]: I0312 15:20:11.970170 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pl9h\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-kube-api-access-8pl9h\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.072388 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.072449 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.072473 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.072494 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.072552 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pl9h\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-kube-api-access-8pl9h\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.072614 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.072672 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: 
\"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.072703 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.072735 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.072759 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.072796 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: 
\"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.072843 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.072873 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.072901 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.076855 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.077871 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.076980 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.077924 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.076957 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.079093 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.079111 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.079175 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.079745 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.080278 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.080810 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.081334 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.087628 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pl9h\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-kube-api-access-8pl9h\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.091103 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.158145 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.706489 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl"] Mar 12 15:20:12 crc kubenswrapper[4832]: I0312 15:20:12.758672 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" event={"ID":"298f7208-9759-4481-973f-2cd1da3c5d64","Type":"ContainerStarted","Data":"011d8a6ebbcd339c81d9ab0b2a89beb0daa9b74aa35cd30ced8234fff4f37212"} Mar 12 15:20:13 crc kubenswrapper[4832]: I0312 15:20:13.768247 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" event={"ID":"298f7208-9759-4481-973f-2cd1da3c5d64","Type":"ContainerStarted","Data":"294528339abe8b15f3073b1fa565286b3ab2a46fab8bb35a4ebe2dd653df5ab1"} Mar 12 15:20:19 crc kubenswrapper[4832]: I0312 15:20:19.476876 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5ptct" Mar 12 15:20:19 crc kubenswrapper[4832]: I0312 15:20:19.497820 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" podStartSLOduration=8.001736859 podStartE2EDuration="8.497800119s" podCreationTimestamp="2026-03-12 15:20:11 +0000 UTC" firstStartedPulling="2026-03-12 15:20:12.706066658 +0000 UTC m=+1971.350080914" lastFinishedPulling="2026-03-12 15:20:13.202129908 +0000 UTC m=+1971.846144174" observedRunningTime="2026-03-12 15:20:13.795468517 +0000 UTC m=+1972.439482733" watchObservedRunningTime="2026-03-12 15:20:19.497800119 +0000 UTC 
m=+1978.141814335" Mar 12 15:20:19 crc kubenswrapper[4832]: I0312 15:20:19.545455 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5ptct"] Mar 12 15:20:19 crc kubenswrapper[4832]: I0312 15:20:19.590481 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6tx8p"] Mar 12 15:20:19 crc kubenswrapper[4832]: I0312 15:20:19.590773 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6tx8p" podUID="2e1e3ef9-c646-4657-bbf7-009a8f0528e8" containerName="registry-server" containerID="cri-o://93ecff1fe3819d3c9354b7cee254a7f2992fcc41d9c1b8e99d6eeca1aaa66d3f" gracePeriod=2 Mar 12 15:20:19 crc kubenswrapper[4832]: I0312 15:20:19.847267 4832 generic.go:334] "Generic (PLEG): container finished" podID="2e1e3ef9-c646-4657-bbf7-009a8f0528e8" containerID="93ecff1fe3819d3c9354b7cee254a7f2992fcc41d9c1b8e99d6eeca1aaa66d3f" exitCode=0 Mar 12 15:20:19 crc kubenswrapper[4832]: I0312 15:20:19.847372 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tx8p" event={"ID":"2e1e3ef9-c646-4657-bbf7-009a8f0528e8","Type":"ContainerDied","Data":"93ecff1fe3819d3c9354b7cee254a7f2992fcc41d9c1b8e99d6eeca1aaa66d3f"} Mar 12 15:20:20 crc kubenswrapper[4832]: I0312 15:20:20.094912 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6tx8p" Mar 12 15:20:20 crc kubenswrapper[4832]: I0312 15:20:20.242980 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e1e3ef9-c646-4657-bbf7-009a8f0528e8-utilities\") pod \"2e1e3ef9-c646-4657-bbf7-009a8f0528e8\" (UID: \"2e1e3ef9-c646-4657-bbf7-009a8f0528e8\") " Mar 12 15:20:20 crc kubenswrapper[4832]: I0312 15:20:20.243176 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e1e3ef9-c646-4657-bbf7-009a8f0528e8-catalog-content\") pod \"2e1e3ef9-c646-4657-bbf7-009a8f0528e8\" (UID: \"2e1e3ef9-c646-4657-bbf7-009a8f0528e8\") " Mar 12 15:20:20 crc kubenswrapper[4832]: I0312 15:20:20.243246 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glmnh\" (UniqueName: \"kubernetes.io/projected/2e1e3ef9-c646-4657-bbf7-009a8f0528e8-kube-api-access-glmnh\") pod \"2e1e3ef9-c646-4657-bbf7-009a8f0528e8\" (UID: \"2e1e3ef9-c646-4657-bbf7-009a8f0528e8\") " Mar 12 15:20:20 crc kubenswrapper[4832]: I0312 15:20:20.243754 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e1e3ef9-c646-4657-bbf7-009a8f0528e8-utilities" (OuterVolumeSpecName: "utilities") pod "2e1e3ef9-c646-4657-bbf7-009a8f0528e8" (UID: "2e1e3ef9-c646-4657-bbf7-009a8f0528e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:20:20 crc kubenswrapper[4832]: I0312 15:20:20.249653 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e1e3ef9-c646-4657-bbf7-009a8f0528e8-kube-api-access-glmnh" (OuterVolumeSpecName: "kube-api-access-glmnh") pod "2e1e3ef9-c646-4657-bbf7-009a8f0528e8" (UID: "2e1e3ef9-c646-4657-bbf7-009a8f0528e8"). InnerVolumeSpecName "kube-api-access-glmnh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:20:20 crc kubenswrapper[4832]: I0312 15:20:20.296263 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e1e3ef9-c646-4657-bbf7-009a8f0528e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e1e3ef9-c646-4657-bbf7-009a8f0528e8" (UID: "2e1e3ef9-c646-4657-bbf7-009a8f0528e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:20:20 crc kubenswrapper[4832]: I0312 15:20:20.345340 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e1e3ef9-c646-4657-bbf7-009a8f0528e8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:20 crc kubenswrapper[4832]: I0312 15:20:20.345372 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glmnh\" (UniqueName: \"kubernetes.io/projected/2e1e3ef9-c646-4657-bbf7-009a8f0528e8-kube-api-access-glmnh\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:20 crc kubenswrapper[4832]: I0312 15:20:20.345383 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e1e3ef9-c646-4657-bbf7-009a8f0528e8-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:20 crc kubenswrapper[4832]: I0312 15:20:20.857198 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tx8p" event={"ID":"2e1e3ef9-c646-4657-bbf7-009a8f0528e8","Type":"ContainerDied","Data":"4fbc89a00db8de1507c26666ef4a2cb10425cebb8104c226158428e710909688"} Mar 12 15:20:20 crc kubenswrapper[4832]: I0312 15:20:20.857570 4832 scope.go:117] "RemoveContainer" containerID="93ecff1fe3819d3c9354b7cee254a7f2992fcc41d9c1b8e99d6eeca1aaa66d3f" Mar 12 15:20:20 crc kubenswrapper[4832]: I0312 15:20:20.857283 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6tx8p" Mar 12 15:20:20 crc kubenswrapper[4832]: I0312 15:20:20.887255 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6tx8p"] Mar 12 15:20:20 crc kubenswrapper[4832]: I0312 15:20:20.895050 4832 scope.go:117] "RemoveContainer" containerID="2cc7a363533470d46543bf015a846808b7001b14481a3309a113cd6182606cef" Mar 12 15:20:20 crc kubenswrapper[4832]: I0312 15:20:20.901160 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6tx8p"] Mar 12 15:20:20 crc kubenswrapper[4832]: I0312 15:20:20.920667 4832 scope.go:117] "RemoveContainer" containerID="a36ad98b42aa7fac269d9eb1e8f8652ad98bf666ea0bb504ba5fb13647ecdc2c" Mar 12 15:20:22 crc kubenswrapper[4832]: I0312 15:20:22.638701 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e1e3ef9-c646-4657-bbf7-009a8f0528e8" path="/var/lib/kubelet/pods/2e1e3ef9-c646-4657-bbf7-009a8f0528e8/volumes" Mar 12 15:20:50 crc kubenswrapper[4832]: I0312 15:20:50.965643 4832 scope.go:117] "RemoveContainer" containerID="4d304cb9b8b919f61719509a8ccb150ff6ee3c9e7daf6e7907cdda441868837d" Mar 12 15:20:51 crc kubenswrapper[4832]: I0312 15:20:51.033423 4832 scope.go:117] "RemoveContainer" containerID="07c7c634bdd9c7a9b17e685f6515bb01a0f0cc48d7e3922b2876195fb76617d4" Mar 12 15:20:51 crc kubenswrapper[4832]: I0312 15:20:51.186703 4832 generic.go:334] "Generic (PLEG): container finished" podID="298f7208-9759-4481-973f-2cd1da3c5d64" containerID="294528339abe8b15f3073b1fa565286b3ab2a46fab8bb35a4ebe2dd653df5ab1" exitCode=0 Mar 12 15:20:51 crc kubenswrapper[4832]: I0312 15:20:51.186775 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" event={"ID":"298f7208-9759-4481-973f-2cd1da3c5d64","Type":"ContainerDied","Data":"294528339abe8b15f3073b1fa565286b3ab2a46fab8bb35a4ebe2dd653df5ab1"} Mar 12 
15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.589942 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.765192 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"298f7208-9759-4481-973f-2cd1da3c5d64\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.765256 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-nova-combined-ca-bundle\") pod \"298f7208-9759-4481-973f-2cd1da3c5d64\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.765345 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"298f7208-9759-4481-973f-2cd1da3c5d64\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.765362 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-repo-setup-combined-ca-bundle\") pod \"298f7208-9759-4481-973f-2cd1da3c5d64\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.765385 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pl9h\" (UniqueName: 
\"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-kube-api-access-8pl9h\") pod \"298f7208-9759-4481-973f-2cd1da3c5d64\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.765407 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-inventory\") pod \"298f7208-9759-4481-973f-2cd1da3c5d64\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.765440 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-ssh-key-openstack-edpm-ipam\") pod \"298f7208-9759-4481-973f-2cd1da3c5d64\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.765462 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-ovn-combined-ca-bundle\") pod \"298f7208-9759-4481-973f-2cd1da3c5d64\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.765494 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-bootstrap-combined-ca-bundle\") pod \"298f7208-9759-4481-973f-2cd1da3c5d64\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.765573 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-neutron-metadata-combined-ca-bundle\") pod \"298f7208-9759-4481-973f-2cd1da3c5d64\" (UID: 
\"298f7208-9759-4481-973f-2cd1da3c5d64\") " Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.765653 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-telemetry-combined-ca-bundle\") pod \"298f7208-9759-4481-973f-2cd1da3c5d64\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.765707 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"298f7208-9759-4481-973f-2cd1da3c5d64\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.765725 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-ovn-default-certs-0\") pod \"298f7208-9759-4481-973f-2cd1da3c5d64\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.765746 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-libvirt-combined-ca-bundle\") pod \"298f7208-9759-4481-973f-2cd1da3c5d64\" (UID: \"298f7208-9759-4481-973f-2cd1da3c5d64\") " Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.770932 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "298f7208-9759-4481-973f-2cd1da3c5d64" (UID: "298f7208-9759-4481-973f-2cd1da3c5d64"). 
InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.770981 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "298f7208-9759-4481-973f-2cd1da3c5d64" (UID: "298f7208-9759-4481-973f-2cd1da3c5d64"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.771331 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "298f7208-9759-4481-973f-2cd1da3c5d64" (UID: "298f7208-9759-4481-973f-2cd1da3c5d64"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.772017 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "298f7208-9759-4481-973f-2cd1da3c5d64" (UID: "298f7208-9759-4481-973f-2cd1da3c5d64"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.772188 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "298f7208-9759-4481-973f-2cd1da3c5d64" (UID: "298f7208-9759-4481-973f-2cd1da3c5d64"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.773407 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "298f7208-9759-4481-973f-2cd1da3c5d64" (UID: "298f7208-9759-4481-973f-2cd1da3c5d64"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.774069 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "298f7208-9759-4481-973f-2cd1da3c5d64" (UID: "298f7208-9759-4481-973f-2cd1da3c5d64"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.774221 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "298f7208-9759-4481-973f-2cd1da3c5d64" (UID: "298f7208-9759-4481-973f-2cd1da3c5d64"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.774292 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "298f7208-9759-4481-973f-2cd1da3c5d64" (UID: "298f7208-9759-4481-973f-2cd1da3c5d64"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.775053 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "298f7208-9759-4481-973f-2cd1da3c5d64" (UID: "298f7208-9759-4481-973f-2cd1da3c5d64"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.775526 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-kube-api-access-8pl9h" (OuterVolumeSpecName: "kube-api-access-8pl9h") pod "298f7208-9759-4481-973f-2cd1da3c5d64" (UID: "298f7208-9759-4481-973f-2cd1da3c5d64"). InnerVolumeSpecName "kube-api-access-8pl9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.778717 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "298f7208-9759-4481-973f-2cd1da3c5d64" (UID: "298f7208-9759-4481-973f-2cd1da3c5d64"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.797870 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-inventory" (OuterVolumeSpecName: "inventory") pod "298f7208-9759-4481-973f-2cd1da3c5d64" (UID: "298f7208-9759-4481-973f-2cd1da3c5d64"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.816746 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "298f7208-9759-4481-973f-2cd1da3c5d64" (UID: "298f7208-9759-4481-973f-2cd1da3c5d64"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.868202 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.868239 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.868257 4832 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.868272 4832 reconciler_common.go:293] "Volume detached 
for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.868284 4832 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.868298 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.868310 4832 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.868322 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pl9h\" (UniqueName: \"kubernetes.io/projected/298f7208-9759-4481-973f-2cd1da3c5d64-kube-api-access-8pl9h\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.868380 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.868393 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:52 crc 
kubenswrapper[4832]: I0312 15:20:52.868406 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.868417 4832 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.868451 4832 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:52 crc kubenswrapper[4832]: I0312 15:20:52.868462 4832 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298f7208-9759-4481-973f-2cd1da3c5d64-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.209670 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" event={"ID":"298f7208-9759-4481-973f-2cd1da3c5d64","Type":"ContainerDied","Data":"011d8a6ebbcd339c81d9ab0b2a89beb0daa9b74aa35cd30ced8234fff4f37212"} Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.209736 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="011d8a6ebbcd339c81d9ab0b2a89beb0daa9b74aa35cd30ced8234fff4f37212" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.209738 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.332076 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd"] Mar 12 15:20:53 crc kubenswrapper[4832]: E0312 15:20:53.332476 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e1e3ef9-c646-4657-bbf7-009a8f0528e8" containerName="extract-content" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.332495 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1e3ef9-c646-4657-bbf7-009a8f0528e8" containerName="extract-content" Mar 12 15:20:53 crc kubenswrapper[4832]: E0312 15:20:53.332545 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="298f7208-9759-4481-973f-2cd1da3c5d64" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.332556 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="298f7208-9759-4481-973f-2cd1da3c5d64" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 12 15:20:53 crc kubenswrapper[4832]: E0312 15:20:53.332577 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e1e3ef9-c646-4657-bbf7-009a8f0528e8" containerName="registry-server" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.332586 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1e3ef9-c646-4657-bbf7-009a8f0528e8" containerName="registry-server" Mar 12 15:20:53 crc kubenswrapper[4832]: E0312 15:20:53.332603 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e1e3ef9-c646-4657-bbf7-009a8f0528e8" containerName="extract-utilities" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.332611 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1e3ef9-c646-4657-bbf7-009a8f0528e8" containerName="extract-utilities" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.332879 
4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e1e3ef9-c646-4657-bbf7-009a8f0528e8" containerName="registry-server" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.332913 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="298f7208-9759-4481-973f-2cd1da3c5d64" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.334225 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.342571 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd"] Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.376468 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.376582 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.376630 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.376931 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.377008 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6npm" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.478933 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8bfwd\" (UID: 
\"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.479266 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pd49\" (UniqueName: \"kubernetes.io/projected/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-kube-api-access-7pd49\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8bfwd\" (UID: \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.479385 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8bfwd\" (UID: \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.479427 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8bfwd\" (UID: \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.479479 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8bfwd\" (UID: \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.581909 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8bfwd\" (UID: \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.582828 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pd49\" (UniqueName: \"kubernetes.io/projected/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-kube-api-access-7pd49\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8bfwd\" (UID: \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.582886 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8bfwd\" (UID: \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.582953 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8bfwd\" (UID: \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.583001 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8bfwd\" (UID: 
\"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.584341 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8bfwd\" (UID: \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.586238 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8bfwd\" (UID: \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.586868 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8bfwd\" (UID: \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.592823 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8bfwd\" (UID: \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.603048 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pd49\" (UniqueName: 
\"kubernetes.io/projected/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-kube-api-access-7pd49\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8bfwd\" (UID: \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" Mar 12 15:20:53 crc kubenswrapper[4832]: I0312 15:20:53.697711 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" Mar 12 15:20:54 crc kubenswrapper[4832]: I0312 15:20:54.207861 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd"] Mar 12 15:20:54 crc kubenswrapper[4832]: W0312 15:20:54.208413 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf20eb1c2_5228_44bb_a4c8_f6bb88a8fd89.slice/crio-30adf4cde9a1f887069e7f8ee74ee369ff19aa97922fc9d5825ce5ea896be88d WatchSource:0}: Error finding container 30adf4cde9a1f887069e7f8ee74ee369ff19aa97922fc9d5825ce5ea896be88d: Status 404 returned error can't find the container with id 30adf4cde9a1f887069e7f8ee74ee369ff19aa97922fc9d5825ce5ea896be88d Mar 12 15:20:54 crc kubenswrapper[4832]: I0312 15:20:54.228666 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" event={"ID":"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89","Type":"ContainerStarted","Data":"30adf4cde9a1f887069e7f8ee74ee369ff19aa97922fc9d5825ce5ea896be88d"} Mar 12 15:20:55 crc kubenswrapper[4832]: I0312 15:20:55.263676 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" event={"ID":"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89","Type":"ContainerStarted","Data":"100c66c8a977ce34819c737e912b0728b54d6f52498c7cd9a7cf954cb69f0b27"} Mar 12 15:20:55 crc kubenswrapper[4832]: I0312 15:20:55.298721 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" podStartSLOduration=1.6933095 podStartE2EDuration="2.298696892s" podCreationTimestamp="2026-03-12 15:20:53 +0000 UTC" firstStartedPulling="2026-03-12 15:20:54.212860044 +0000 UTC m=+2012.856874270" lastFinishedPulling="2026-03-12 15:20:54.818247376 +0000 UTC m=+2013.462261662" observedRunningTime="2026-03-12 15:20:55.287709539 +0000 UTC m=+2013.931723775" watchObservedRunningTime="2026-03-12 15:20:55.298696892 +0000 UTC m=+2013.942711128" Mar 12 15:21:26 crc kubenswrapper[4832]: I0312 15:21:26.315354 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:21:26 crc kubenswrapper[4832]: I0312 15:21:26.316014 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:21:56 crc kubenswrapper[4832]: I0312 15:21:56.315126 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:21:56 crc kubenswrapper[4832]: I0312 15:21:56.315997 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Mar 12 15:22:00 crc kubenswrapper[4832]: I0312 15:22:00.152870 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555482-d68mx"] Mar 12 15:22:00 crc kubenswrapper[4832]: I0312 15:22:00.154561 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555482-d68mx" Mar 12 15:22:00 crc kubenswrapper[4832]: I0312 15:22:00.157293 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:22:00 crc kubenswrapper[4832]: I0312 15:22:00.157634 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:22:00 crc kubenswrapper[4832]: I0312 15:22:00.158575 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:22:00 crc kubenswrapper[4832]: I0312 15:22:00.166697 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555482-d68mx"] Mar 12 15:22:00 crc kubenswrapper[4832]: I0312 15:22:00.196092 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hhdr\" (UniqueName: \"kubernetes.io/projected/bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9-kube-api-access-2hhdr\") pod \"auto-csr-approver-29555482-d68mx\" (UID: \"bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9\") " pod="openshift-infra/auto-csr-approver-29555482-d68mx" Mar 12 15:22:00 crc kubenswrapper[4832]: I0312 15:22:00.298735 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hhdr\" (UniqueName: \"kubernetes.io/projected/bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9-kube-api-access-2hhdr\") pod \"auto-csr-approver-29555482-d68mx\" (UID: \"bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9\") " pod="openshift-infra/auto-csr-approver-29555482-d68mx" Mar 12 15:22:00 crc kubenswrapper[4832]: I0312 15:22:00.318411 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hhdr\" (UniqueName: \"kubernetes.io/projected/bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9-kube-api-access-2hhdr\") pod \"auto-csr-approver-29555482-d68mx\" (UID: \"bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9\") " pod="openshift-infra/auto-csr-approver-29555482-d68mx" Mar 12 15:22:00 crc kubenswrapper[4832]: I0312 15:22:00.478334 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555482-d68mx" Mar 12 15:22:00 crc kubenswrapper[4832]: I0312 15:22:00.940649 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bx9rs"] Mar 12 15:22:00 crc kubenswrapper[4832]: I0312 15:22:00.942658 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bx9rs" Mar 12 15:22:00 crc kubenswrapper[4832]: I0312 15:22:00.971222 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bx9rs"] Mar 12 15:22:00 crc kubenswrapper[4832]: I0312 15:22:00.985799 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555482-d68mx"] Mar 12 15:22:00 crc kubenswrapper[4832]: W0312 15:22:00.995249 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc14c7b9_5c50_4f8f_b601_2721bdb7d9b9.slice/crio-1a16f19b4c7cf83aca4d541862036a2786a821c7919bfab3649c998a0bb15661 WatchSource:0}: Error finding container 1a16f19b4c7cf83aca4d541862036a2786a821c7919bfab3649c998a0bb15661: Status 404 returned error can't find the container with id 1a16f19b4c7cf83aca4d541862036a2786a821c7919bfab3649c998a0bb15661 Mar 12 15:22:00 crc kubenswrapper[4832]: I0312 15:22:00.999487 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 15:22:01 crc kubenswrapper[4832]: I0312 
15:22:01.013829 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hk2m\" (UniqueName: \"kubernetes.io/projected/ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4-kube-api-access-7hk2m\") pod \"redhat-operators-bx9rs\" (UID: \"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4\") " pod="openshift-marketplace/redhat-operators-bx9rs" Mar 12 15:22:01 crc kubenswrapper[4832]: I0312 15:22:01.014307 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4-utilities\") pod \"redhat-operators-bx9rs\" (UID: \"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4\") " pod="openshift-marketplace/redhat-operators-bx9rs" Mar 12 15:22:01 crc kubenswrapper[4832]: I0312 15:22:01.014453 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4-catalog-content\") pod \"redhat-operators-bx9rs\" (UID: \"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4\") " pod="openshift-marketplace/redhat-operators-bx9rs" Mar 12 15:22:01 crc kubenswrapper[4832]: I0312 15:22:01.116722 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4-utilities\") pod \"redhat-operators-bx9rs\" (UID: \"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4\") " pod="openshift-marketplace/redhat-operators-bx9rs" Mar 12 15:22:01 crc kubenswrapper[4832]: I0312 15:22:01.116821 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4-catalog-content\") pod \"redhat-operators-bx9rs\" (UID: \"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4\") " pod="openshift-marketplace/redhat-operators-bx9rs" Mar 12 15:22:01 crc kubenswrapper[4832]: I0312 
15:22:01.116878 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hk2m\" (UniqueName: \"kubernetes.io/projected/ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4-kube-api-access-7hk2m\") pod \"redhat-operators-bx9rs\" (UID: \"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4\") " pod="openshift-marketplace/redhat-operators-bx9rs" Mar 12 15:22:01 crc kubenswrapper[4832]: I0312 15:22:01.117427 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4-utilities\") pod \"redhat-operators-bx9rs\" (UID: \"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4\") " pod="openshift-marketplace/redhat-operators-bx9rs" Mar 12 15:22:01 crc kubenswrapper[4832]: I0312 15:22:01.117550 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4-catalog-content\") pod \"redhat-operators-bx9rs\" (UID: \"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4\") " pod="openshift-marketplace/redhat-operators-bx9rs" Mar 12 15:22:01 crc kubenswrapper[4832]: I0312 15:22:01.137568 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hk2m\" (UniqueName: \"kubernetes.io/projected/ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4-kube-api-access-7hk2m\") pod \"redhat-operators-bx9rs\" (UID: \"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4\") " pod="openshift-marketplace/redhat-operators-bx9rs" Mar 12 15:22:01 crc kubenswrapper[4832]: I0312 15:22:01.158483 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555482-d68mx" event={"ID":"bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9","Type":"ContainerStarted","Data":"1a16f19b4c7cf83aca4d541862036a2786a821c7919bfab3649c998a0bb15661"} Mar 12 15:22:01 crc kubenswrapper[4832]: I0312 15:22:01.266675 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bx9rs" Mar 12 15:22:01 crc kubenswrapper[4832]: I0312 15:22:01.704158 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bx9rs"] Mar 12 15:22:01 crc kubenswrapper[4832]: W0312 15:22:01.714476 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffc7ce36_e51b_4a7a_9fb5_71dcb7154fd4.slice/crio-5f439f73eea9308e037565d400a3254c8988a416f3a232a79c5aeda306cbf9f7 WatchSource:0}: Error finding container 5f439f73eea9308e037565d400a3254c8988a416f3a232a79c5aeda306cbf9f7: Status 404 returned error can't find the container with id 5f439f73eea9308e037565d400a3254c8988a416f3a232a79c5aeda306cbf9f7 Mar 12 15:22:02 crc kubenswrapper[4832]: I0312 15:22:02.166892 4832 generic.go:334] "Generic (PLEG): container finished" podID="f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89" containerID="100c66c8a977ce34819c737e912b0728b54d6f52498c7cd9a7cf954cb69f0b27" exitCode=0 Mar 12 15:22:02 crc kubenswrapper[4832]: I0312 15:22:02.167079 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" event={"ID":"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89","Type":"ContainerDied","Data":"100c66c8a977ce34819c737e912b0728b54d6f52498c7cd9a7cf954cb69f0b27"} Mar 12 15:22:02 crc kubenswrapper[4832]: I0312 15:22:02.169224 4832 generic.go:334] "Generic (PLEG): container finished" podID="ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4" containerID="6b0b01d6dcc5cc2124d7d6327824d277304536e1da5ae7294c834669ebb7db3f" exitCode=0 Mar 12 15:22:02 crc kubenswrapper[4832]: I0312 15:22:02.169258 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bx9rs" event={"ID":"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4","Type":"ContainerDied","Data":"6b0b01d6dcc5cc2124d7d6327824d277304536e1da5ae7294c834669ebb7db3f"} Mar 12 15:22:02 crc kubenswrapper[4832]: I0312 
15:22:02.169280 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bx9rs" event={"ID":"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4","Type":"ContainerStarted","Data":"5f439f73eea9308e037565d400a3254c8988a416f3a232a79c5aeda306cbf9f7"} Mar 12 15:22:03 crc kubenswrapper[4832]: I0312 15:22:03.185270 4832 generic.go:334] "Generic (PLEG): container finished" podID="bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9" containerID="b92d28140951945ac31129c0d1508ede70c0818a0fec2e487563119129b0a1ad" exitCode=0 Mar 12 15:22:03 crc kubenswrapper[4832]: I0312 15:22:03.185480 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555482-d68mx" event={"ID":"bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9","Type":"ContainerDied","Data":"b92d28140951945ac31129c0d1508ede70c0818a0fec2e487563119129b0a1ad"} Mar 12 15:22:03 crc kubenswrapper[4832]: I0312 15:22:03.654343 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" Mar 12 15:22:03 crc kubenswrapper[4832]: I0312 15:22:03.764284 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-ovncontroller-config-0\") pod \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\" (UID: \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " Mar 12 15:22:03 crc kubenswrapper[4832]: I0312 15:22:03.764350 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-ssh-key-openstack-edpm-ipam\") pod \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\" (UID: \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " Mar 12 15:22:03 crc kubenswrapper[4832]: I0312 15:22:03.764422 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-ovn-combined-ca-bundle\") pod \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\" (UID: \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " Mar 12 15:22:03 crc kubenswrapper[4832]: I0312 15:22:03.764456 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-inventory\") pod \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\" (UID: \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " Mar 12 15:22:03 crc kubenswrapper[4832]: I0312 15:22:03.764577 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pd49\" (UniqueName: \"kubernetes.io/projected/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-kube-api-access-7pd49\") pod \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\" (UID: \"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89\") " Mar 12 15:22:03 crc kubenswrapper[4832]: I0312 15:22:03.784324 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-kube-api-access-7pd49" (OuterVolumeSpecName: "kube-api-access-7pd49") pod "f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89" (UID: "f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89"). InnerVolumeSpecName "kube-api-access-7pd49". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:22:03 crc kubenswrapper[4832]: I0312 15:22:03.785047 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89" (UID: "f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:22:03 crc kubenswrapper[4832]: I0312 15:22:03.793092 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89" (UID: "f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:22:03 crc kubenswrapper[4832]: I0312 15:22:03.801624 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-inventory" (OuterVolumeSpecName: "inventory") pod "f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89" (UID: "f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:22:03 crc kubenswrapper[4832]: I0312 15:22:03.801663 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89" (UID: "f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:22:03 crc kubenswrapper[4832]: I0312 15:22:03.867888 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pd49\" (UniqueName: \"kubernetes.io/projected/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-kube-api-access-7pd49\") on node \"crc\" DevicePath \"\"" Mar 12 15:22:03 crc kubenswrapper[4832]: I0312 15:22:03.867942 4832 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:22:03 crc kubenswrapper[4832]: I0312 15:22:03.867964 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 15:22:03 crc kubenswrapper[4832]: I0312 15:22:03.867983 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:22:03 crc kubenswrapper[4832]: I0312 15:22:03.868001 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.201185 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bx9rs" event={"ID":"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4","Type":"ContainerStarted","Data":"fe542da6b1b11519bf811a1d4c4d87394a90842094f24c9e551e7d572341c869"} Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.204208 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" 
event={"ID":"f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89","Type":"ContainerDied","Data":"30adf4cde9a1f887069e7f8ee74ee369ff19aa97922fc9d5825ce5ea896be88d"} Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.204269 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30adf4cde9a1f887069e7f8ee74ee369ff19aa97922fc9d5825ce5ea896be88d" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.204284 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8bfwd" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.321822 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk"] Mar 12 15:22:04 crc kubenswrapper[4832]: E0312 15:22:04.322447 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.322468 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.322895 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.323781 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.331128 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.331235 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.331323 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.331358 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.331545 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6npm" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.331609 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.350374 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk"] Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.503877 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.503980 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nb6nn\" (UniqueName: \"kubernetes.io/projected/75cf2468-905f-4551-ba33-4c055f2ac4ce-kube-api-access-nb6nn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.504266 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.504335 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.504366 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.504396 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.587736 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555482-d68mx" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.606137 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.606242 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.606278 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.606307 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.606382 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.606420 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb6nn\" (UniqueName: \"kubernetes.io/projected/75cf2468-905f-4551-ba33-4c055f2ac4ce-kube-api-access-nb6nn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.612519 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 
15:22:04.613214 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.615167 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.619863 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.619863 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.626318 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb6nn\" (UniqueName: 
\"kubernetes.io/projected/75cf2468-905f-4551-ba33-4c055f2ac4ce-kube-api-access-nb6nn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.648454 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.707814 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hhdr\" (UniqueName: \"kubernetes.io/projected/bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9-kube-api-access-2hhdr\") pod \"bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9\" (UID: \"bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9\") " Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.713194 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9-kube-api-access-2hhdr" (OuterVolumeSpecName: "kube-api-access-2hhdr") pod "bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9" (UID: "bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9"). InnerVolumeSpecName "kube-api-access-2hhdr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:22:04 crc kubenswrapper[4832]: I0312 15:22:04.812212 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hhdr\" (UniqueName: \"kubernetes.io/projected/bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9-kube-api-access-2hhdr\") on node \"crc\" DevicePath \"\"" Mar 12 15:22:05 crc kubenswrapper[4832]: I0312 15:22:05.221410 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555482-d68mx" event={"ID":"bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9","Type":"ContainerDied","Data":"1a16f19b4c7cf83aca4d541862036a2786a821c7919bfab3649c998a0bb15661"} Mar 12 15:22:05 crc kubenswrapper[4832]: I0312 15:22:05.221809 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a16f19b4c7cf83aca4d541862036a2786a821c7919bfab3649c998a0bb15661" Mar 12 15:22:05 crc kubenswrapper[4832]: I0312 15:22:05.221443 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555482-d68mx" Mar 12 15:22:05 crc kubenswrapper[4832]: I0312 15:22:05.265886 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk"] Mar 12 15:22:05 crc kubenswrapper[4832]: W0312 15:22:05.271252 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75cf2468_905f_4551_ba33_4c055f2ac4ce.slice/crio-5b543698cb2aa1c43bdb484e2895d78c1cc42d7d9bad7efc1e74a74560972384 WatchSource:0}: Error finding container 5b543698cb2aa1c43bdb484e2895d78c1cc42d7d9bad7efc1e74a74560972384: Status 404 returned error can't find the container with id 5b543698cb2aa1c43bdb484e2895d78c1cc42d7d9bad7efc1e74a74560972384 Mar 12 15:22:05 crc kubenswrapper[4832]: I0312 15:22:05.669876 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555476-swzzl"] Mar 12 
15:22:05 crc kubenswrapper[4832]: I0312 15:22:05.678226 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555476-swzzl"] Mar 12 15:22:06 crc kubenswrapper[4832]: I0312 15:22:06.231747 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" event={"ID":"75cf2468-905f-4551-ba33-4c055f2ac4ce","Type":"ContainerStarted","Data":"20e48aa468b50c23cf712476c598e899d89ed28c3fe2c62dc6ceb9fd8cac074e"} Mar 12 15:22:06 crc kubenswrapper[4832]: I0312 15:22:06.232125 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" event={"ID":"75cf2468-905f-4551-ba33-4c055f2ac4ce","Type":"ContainerStarted","Data":"5b543698cb2aa1c43bdb484e2895d78c1cc42d7d9bad7efc1e74a74560972384"} Mar 12 15:22:06 crc kubenswrapper[4832]: I0312 15:22:06.251374 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" podStartSLOduration=1.837717113 podStartE2EDuration="2.251349297s" podCreationTimestamp="2026-03-12 15:22:04 +0000 UTC" firstStartedPulling="2026-03-12 15:22:05.275812859 +0000 UTC m=+2083.919827095" lastFinishedPulling="2026-03-12 15:22:05.689445053 +0000 UTC m=+2084.333459279" observedRunningTime="2026-03-12 15:22:06.250120252 +0000 UTC m=+2084.894134498" watchObservedRunningTime="2026-03-12 15:22:06.251349297 +0000 UTC m=+2084.895363533" Mar 12 15:22:06 crc kubenswrapper[4832]: I0312 15:22:06.632705 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c44e724-8877-4898-bb2b-d5acc63d0168" path="/var/lib/kubelet/pods/1c44e724-8877-4898-bb2b-d5acc63d0168/volumes" Mar 12 15:22:07 crc kubenswrapper[4832]: I0312 15:22:07.248622 4832 generic.go:334] "Generic (PLEG): container finished" podID="ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4" 
containerID="fe542da6b1b11519bf811a1d4c4d87394a90842094f24c9e551e7d572341c869" exitCode=0 Mar 12 15:22:07 crc kubenswrapper[4832]: I0312 15:22:07.248664 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bx9rs" event={"ID":"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4","Type":"ContainerDied","Data":"fe542da6b1b11519bf811a1d4c4d87394a90842094f24c9e551e7d572341c869"} Mar 12 15:22:08 crc kubenswrapper[4832]: I0312 15:22:08.266298 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bx9rs" event={"ID":"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4","Type":"ContainerStarted","Data":"33b345da76828bd2dae7a89d59e14611325b35c543d246a09c53a38456559c6b"} Mar 12 15:22:08 crc kubenswrapper[4832]: I0312 15:22:08.296096 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bx9rs" podStartSLOduration=2.813203603 podStartE2EDuration="8.296073259s" podCreationTimestamp="2026-03-12 15:22:00 +0000 UTC" firstStartedPulling="2026-03-12 15:22:02.170965782 +0000 UTC m=+2080.814980008" lastFinishedPulling="2026-03-12 15:22:07.653835438 +0000 UTC m=+2086.297849664" observedRunningTime="2026-03-12 15:22:08.288500053 +0000 UTC m=+2086.932514309" watchObservedRunningTime="2026-03-12 15:22:08.296073259 +0000 UTC m=+2086.940087505" Mar 12 15:22:11 crc kubenswrapper[4832]: I0312 15:22:11.266915 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bx9rs" Mar 12 15:22:11 crc kubenswrapper[4832]: I0312 15:22:11.267580 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bx9rs" Mar 12 15:22:12 crc kubenswrapper[4832]: I0312 15:22:12.335815 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bx9rs" podUID="ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4" containerName="registry-server" 
probeResult="failure" output=< Mar 12 15:22:12 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Mar 12 15:22:12 crc kubenswrapper[4832]: > Mar 12 15:22:21 crc kubenswrapper[4832]: I0312 15:22:21.345699 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bx9rs" Mar 12 15:22:21 crc kubenswrapper[4832]: I0312 15:22:21.398298 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bx9rs" Mar 12 15:22:21 crc kubenswrapper[4832]: I0312 15:22:21.596547 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bx9rs"] Mar 12 15:22:22 crc kubenswrapper[4832]: I0312 15:22:22.425873 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bx9rs" podUID="ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4" containerName="registry-server" containerID="cri-o://33b345da76828bd2dae7a89d59e14611325b35c543d246a09c53a38456559c6b" gracePeriod=2 Mar 12 15:22:22 crc kubenswrapper[4832]: I0312 15:22:22.921585 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bx9rs" Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.092912 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4-catalog-content\") pod \"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4\" (UID: \"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4\") " Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.093084 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4-utilities\") pod \"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4\" (UID: \"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4\") " Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.093333 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hk2m\" (UniqueName: \"kubernetes.io/projected/ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4-kube-api-access-7hk2m\") pod \"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4\" (UID: \"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4\") " Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.094298 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4-utilities" (OuterVolumeSpecName: "utilities") pod "ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4" (UID: "ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.103103 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4-kube-api-access-7hk2m" (OuterVolumeSpecName: "kube-api-access-7hk2m") pod "ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4" (UID: "ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4"). InnerVolumeSpecName "kube-api-access-7hk2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.196160 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.196211 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hk2m\" (UniqueName: \"kubernetes.io/projected/ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4-kube-api-access-7hk2m\") on node \"crc\" DevicePath \"\"" Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.234334 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4" (UID: "ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.298319 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.436416 4832 generic.go:334] "Generic (PLEG): container finished" podID="ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4" containerID="33b345da76828bd2dae7a89d59e14611325b35c543d246a09c53a38456559c6b" exitCode=0 Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.436467 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bx9rs" event={"ID":"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4","Type":"ContainerDied","Data":"33b345da76828bd2dae7a89d59e14611325b35c543d246a09c53a38456559c6b"} Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.436546 4832 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-bx9rs" event={"ID":"ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4","Type":"ContainerDied","Data":"5f439f73eea9308e037565d400a3254c8988a416f3a232a79c5aeda306cbf9f7"} Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.436567 4832 scope.go:117] "RemoveContainer" containerID="33b345da76828bd2dae7a89d59e14611325b35c543d246a09c53a38456559c6b" Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.436697 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bx9rs" Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.458149 4832 scope.go:117] "RemoveContainer" containerID="fe542da6b1b11519bf811a1d4c4d87394a90842094f24c9e551e7d572341c869" Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.499581 4832 scope.go:117] "RemoveContainer" containerID="6b0b01d6dcc5cc2124d7d6327824d277304536e1da5ae7294c834669ebb7db3f" Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.509697 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bx9rs"] Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.525073 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bx9rs"] Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.540748 4832 scope.go:117] "RemoveContainer" containerID="33b345da76828bd2dae7a89d59e14611325b35c543d246a09c53a38456559c6b" Mar 12 15:22:23 crc kubenswrapper[4832]: E0312 15:22:23.541387 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33b345da76828bd2dae7a89d59e14611325b35c543d246a09c53a38456559c6b\": container with ID starting with 33b345da76828bd2dae7a89d59e14611325b35c543d246a09c53a38456559c6b not found: ID does not exist" containerID="33b345da76828bd2dae7a89d59e14611325b35c543d246a09c53a38456559c6b" Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.541457 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33b345da76828bd2dae7a89d59e14611325b35c543d246a09c53a38456559c6b"} err="failed to get container status \"33b345da76828bd2dae7a89d59e14611325b35c543d246a09c53a38456559c6b\": rpc error: code = NotFound desc = could not find container \"33b345da76828bd2dae7a89d59e14611325b35c543d246a09c53a38456559c6b\": container with ID starting with 33b345da76828bd2dae7a89d59e14611325b35c543d246a09c53a38456559c6b not found: ID does not exist" Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.541540 4832 scope.go:117] "RemoveContainer" containerID="fe542da6b1b11519bf811a1d4c4d87394a90842094f24c9e551e7d572341c869" Mar 12 15:22:23 crc kubenswrapper[4832]: E0312 15:22:23.541918 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe542da6b1b11519bf811a1d4c4d87394a90842094f24c9e551e7d572341c869\": container with ID starting with fe542da6b1b11519bf811a1d4c4d87394a90842094f24c9e551e7d572341c869 not found: ID does not exist" containerID="fe542da6b1b11519bf811a1d4c4d87394a90842094f24c9e551e7d572341c869" Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.541973 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe542da6b1b11519bf811a1d4c4d87394a90842094f24c9e551e7d572341c869"} err="failed to get container status \"fe542da6b1b11519bf811a1d4c4d87394a90842094f24c9e551e7d572341c869\": rpc error: code = NotFound desc = could not find container \"fe542da6b1b11519bf811a1d4c4d87394a90842094f24c9e551e7d572341c869\": container with ID starting with fe542da6b1b11519bf811a1d4c4d87394a90842094f24c9e551e7d572341c869 not found: ID does not exist" Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.542002 4832 scope.go:117] "RemoveContainer" containerID="6b0b01d6dcc5cc2124d7d6327824d277304536e1da5ae7294c834669ebb7db3f" Mar 12 15:22:23 crc kubenswrapper[4832]: E0312 
15:22:23.542479 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b0b01d6dcc5cc2124d7d6327824d277304536e1da5ae7294c834669ebb7db3f\": container with ID starting with 6b0b01d6dcc5cc2124d7d6327824d277304536e1da5ae7294c834669ebb7db3f not found: ID does not exist" containerID="6b0b01d6dcc5cc2124d7d6327824d277304536e1da5ae7294c834669ebb7db3f" Mar 12 15:22:23 crc kubenswrapper[4832]: I0312 15:22:23.542556 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b0b01d6dcc5cc2124d7d6327824d277304536e1da5ae7294c834669ebb7db3f"} err="failed to get container status \"6b0b01d6dcc5cc2124d7d6327824d277304536e1da5ae7294c834669ebb7db3f\": rpc error: code = NotFound desc = could not find container \"6b0b01d6dcc5cc2124d7d6327824d277304536e1da5ae7294c834669ebb7db3f\": container with ID starting with 6b0b01d6dcc5cc2124d7d6327824d277304536e1da5ae7294c834669ebb7db3f not found: ID does not exist" Mar 12 15:22:24 crc kubenswrapper[4832]: I0312 15:22:24.633616 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4" path="/var/lib/kubelet/pods/ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4/volumes" Mar 12 15:22:26 crc kubenswrapper[4832]: I0312 15:22:26.314432 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:22:26 crc kubenswrapper[4832]: I0312 15:22:26.314817 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 12 15:22:26 crc kubenswrapper[4832]: I0312 15:22:26.314866 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 15:22:26 crc kubenswrapper[4832]: I0312 15:22:26.315643 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8770e7b9c1d71d91f69b085e39e97d28156baef5e8fd0f0f7513dd17570fa130"} pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 15:22:26 crc kubenswrapper[4832]: I0312 15:22:26.315712 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" containerID="cri-o://8770e7b9c1d71d91f69b085e39e97d28156baef5e8fd0f0f7513dd17570fa130" gracePeriod=600 Mar 12 15:22:26 crc kubenswrapper[4832]: I0312 15:22:26.475017 4832 generic.go:334] "Generic (PLEG): container finished" podID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerID="8770e7b9c1d71d91f69b085e39e97d28156baef5e8fd0f0f7513dd17570fa130" exitCode=0 Mar 12 15:22:26 crc kubenswrapper[4832]: I0312 15:22:26.475078 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerDied","Data":"8770e7b9c1d71d91f69b085e39e97d28156baef5e8fd0f0f7513dd17570fa130"} Mar 12 15:22:26 crc kubenswrapper[4832]: I0312 15:22:26.475160 4832 scope.go:117] "RemoveContainer" containerID="f334e337b3b174a46e4c20d97e565389962cac563ced727aa8b9b223d38cd3c3" Mar 12 15:22:27 crc kubenswrapper[4832]: I0312 15:22:27.489419 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" 
event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerStarted","Data":"13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146"} Mar 12 15:22:51 crc kubenswrapper[4832]: I0312 15:22:51.183631 4832 scope.go:117] "RemoveContainer" containerID="98ef35b8ebe8bff39e8497a69d2cc7e484b29a04905dbfd3d19a32d4efaf8df7" Mar 12 15:22:56 crc kubenswrapper[4832]: I0312 15:22:56.792387 4832 generic.go:334] "Generic (PLEG): container finished" podID="75cf2468-905f-4551-ba33-4c055f2ac4ce" containerID="20e48aa468b50c23cf712476c598e899d89ed28c3fe2c62dc6ceb9fd8cac074e" exitCode=0 Mar 12 15:22:56 crc kubenswrapper[4832]: I0312 15:22:56.792917 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" event={"ID":"75cf2468-905f-4551-ba33-4c055f2ac4ce","Type":"ContainerDied","Data":"20e48aa468b50c23cf712476c598e899d89ed28c3fe2c62dc6ceb9fd8cac074e"} Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.238626 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.297672 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-ssh-key-openstack-edpm-ipam\") pod \"75cf2468-905f-4551-ba33-4c055f2ac4ce\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.297731 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-nova-metadata-neutron-config-0\") pod \"75cf2468-905f-4551-ba33-4c055f2ac4ce\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.297892 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb6nn\" (UniqueName: \"kubernetes.io/projected/75cf2468-905f-4551-ba33-4c055f2ac4ce-kube-api-access-nb6nn\") pod \"75cf2468-905f-4551-ba33-4c055f2ac4ce\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.297932 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-neutron-ovn-metadata-agent-neutron-config-0\") pod \"75cf2468-905f-4551-ba33-4c055f2ac4ce\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.297966 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-inventory\") pod \"75cf2468-905f-4551-ba33-4c055f2ac4ce\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " Mar 12 15:22:58 crc 
kubenswrapper[4832]: I0312 15:22:58.298080 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-neutron-metadata-combined-ca-bundle\") pod \"75cf2468-905f-4551-ba33-4c055f2ac4ce\" (UID: \"75cf2468-905f-4551-ba33-4c055f2ac4ce\") " Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.304741 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75cf2468-905f-4551-ba33-4c055f2ac4ce-kube-api-access-nb6nn" (OuterVolumeSpecName: "kube-api-access-nb6nn") pod "75cf2468-905f-4551-ba33-4c055f2ac4ce" (UID: "75cf2468-905f-4551-ba33-4c055f2ac4ce"). InnerVolumeSpecName "kube-api-access-nb6nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.304843 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "75cf2468-905f-4551-ba33-4c055f2ac4ce" (UID: "75cf2468-905f-4551-ba33-4c055f2ac4ce"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.322745 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "75cf2468-905f-4551-ba33-4c055f2ac4ce" (UID: "75cf2468-905f-4551-ba33-4c055f2ac4ce"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.324928 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "75cf2468-905f-4551-ba33-4c055f2ac4ce" (UID: "75cf2468-905f-4551-ba33-4c055f2ac4ce"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.335926 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-inventory" (OuterVolumeSpecName: "inventory") pod "75cf2468-905f-4551-ba33-4c055f2ac4ce" (UID: "75cf2468-905f-4551-ba33-4c055f2ac4ce"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.338108 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "75cf2468-905f-4551-ba33-4c055f2ac4ce" (UID: "75cf2468-905f-4551-ba33-4c055f2ac4ce"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.409547 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.409585 4832 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.409595 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb6nn\" (UniqueName: \"kubernetes.io/projected/75cf2468-905f-4551-ba33-4c055f2ac4ce-kube-api-access-nb6nn\") on node \"crc\" DevicePath \"\"" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.409610 4832 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.409621 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.409633 4832 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75cf2468-905f-4551-ba33-4c055f2ac4ce-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.821213 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" event={"ID":"75cf2468-905f-4551-ba33-4c055f2ac4ce","Type":"ContainerDied","Data":"5b543698cb2aa1c43bdb484e2895d78c1cc42d7d9bad7efc1e74a74560972384"} Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.821275 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b543698cb2aa1c43bdb484e2895d78c1cc42d7d9bad7efc1e74a74560972384" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.821356 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.971309 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq"] Mar 12 15:22:58 crc kubenswrapper[4832]: E0312 15:22:58.971736 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4" containerName="registry-server" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.971762 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4" containerName="registry-server" Mar 12 15:22:58 crc kubenswrapper[4832]: E0312 15:22:58.971797 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4" containerName="extract-content" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.971805 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4" containerName="extract-content" Mar 12 15:22:58 crc kubenswrapper[4832]: E0312 15:22:58.971826 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4" containerName="extract-utilities" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.971835 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4" containerName="extract-utilities" Mar 12 15:22:58 crc kubenswrapper[4832]: E0312 15:22:58.971852 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9" containerName="oc" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.971860 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9" containerName="oc" Mar 12 15:22:58 crc kubenswrapper[4832]: E0312 15:22:58.971880 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75cf2468-905f-4551-ba33-4c055f2ac4ce" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.971894 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="75cf2468-905f-4551-ba33-4c055f2ac4ce" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.972127 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="75cf2468-905f-4551-ba33-4c055f2ac4ce" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.972154 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffc7ce36-e51b-4a7a-9fb5-71dcb7154fd4" containerName="registry-server" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.972194 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9" containerName="oc" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.973096 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.975202 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6npm" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.975437 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.975655 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.975866 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.981645 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Mar 12 15:22:58 crc kubenswrapper[4832]: I0312 15:22:58.985630 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq"] Mar 12 15:22:59 crc kubenswrapper[4832]: I0312 15:22:59.020773 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wmghq\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" Mar 12 15:22:59 crc kubenswrapper[4832]: I0312 15:22:59.020928 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qdff\" (UniqueName: \"kubernetes.io/projected/a043f4e6-f64c-4a26-ac04-93005bcc77d0-kube-api-access-5qdff\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wmghq\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" Mar 12 15:22:59 crc kubenswrapper[4832]: I0312 15:22:59.020977 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wmghq\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" Mar 12 15:22:59 crc kubenswrapper[4832]: I0312 15:22:59.021218 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wmghq\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" Mar 12 15:22:59 crc kubenswrapper[4832]: I0312 15:22:59.021536 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wmghq\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" Mar 12 15:22:59 crc kubenswrapper[4832]: I0312 15:22:59.122814 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wmghq\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" Mar 12 15:22:59 crc kubenswrapper[4832]: I0312 15:22:59.123145 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wmghq\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" Mar 12 15:22:59 crc kubenswrapper[4832]: I0312 15:22:59.123189 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qdff\" (UniqueName: \"kubernetes.io/projected/a043f4e6-f64c-4a26-ac04-93005bcc77d0-kube-api-access-5qdff\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wmghq\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" Mar 12 15:22:59 crc kubenswrapper[4832]: I0312 15:22:59.123211 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wmghq\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" Mar 12 15:22:59 crc kubenswrapper[4832]: I0312 15:22:59.123261 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wmghq\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" Mar 12 15:22:59 crc kubenswrapper[4832]: I0312 15:22:59.127046 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-libvirt-combined-ca-bundle\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-wmghq\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" Mar 12 15:22:59 crc kubenswrapper[4832]: I0312 15:22:59.127380 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wmghq\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" Mar 12 15:22:59 crc kubenswrapper[4832]: I0312 15:22:59.129025 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wmghq\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" Mar 12 15:22:59 crc kubenswrapper[4832]: I0312 15:22:59.130960 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wmghq\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" Mar 12 15:22:59 crc kubenswrapper[4832]: I0312 15:22:59.143331 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qdff\" (UniqueName: \"kubernetes.io/projected/a043f4e6-f64c-4a26-ac04-93005bcc77d0-kube-api-access-5qdff\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wmghq\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" Mar 12 15:22:59 crc kubenswrapper[4832]: I0312 15:22:59.290778 4832 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" Mar 12 15:22:59 crc kubenswrapper[4832]: I0312 15:22:59.843447 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq"] Mar 12 15:23:00 crc kubenswrapper[4832]: I0312 15:23:00.841632 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" event={"ID":"a043f4e6-f64c-4a26-ac04-93005bcc77d0","Type":"ContainerStarted","Data":"37e4dbbf4813b5c122820d52fffdaa28b20895c1251357ee7a78e68a5a1fee69"} Mar 12 15:23:00 crc kubenswrapper[4832]: I0312 15:23:00.842033 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" event={"ID":"a043f4e6-f64c-4a26-ac04-93005bcc77d0","Type":"ContainerStarted","Data":"ed3ef45529687b2a3f91d24bee38a99d69d7e7956ece787342fcd61d402385af"} Mar 12 15:23:00 crc kubenswrapper[4832]: I0312 15:23:00.862485 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" podStartSLOduration=2.256099085 podStartE2EDuration="2.862465933s" podCreationTimestamp="2026-03-12 15:22:58 +0000 UTC" firstStartedPulling="2026-03-12 15:22:59.844358676 +0000 UTC m=+2138.488372902" lastFinishedPulling="2026-03-12 15:23:00.450725524 +0000 UTC m=+2139.094739750" observedRunningTime="2026-03-12 15:23:00.85990884 +0000 UTC m=+2139.503923076" watchObservedRunningTime="2026-03-12 15:23:00.862465933 +0000 UTC m=+2139.506480159" Mar 12 15:23:47 crc kubenswrapper[4832]: I0312 15:23:47.272322 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fdtqk"] Mar 12 15:23:47 crc kubenswrapper[4832]: I0312 15:23:47.276390 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fdtqk" Mar 12 15:23:47 crc kubenswrapper[4832]: I0312 15:23:47.286041 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdtqk"] Mar 12 15:23:47 crc kubenswrapper[4832]: I0312 15:23:47.377995 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7-utilities\") pod \"certified-operators-fdtqk\" (UID: \"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7\") " pod="openshift-marketplace/certified-operators-fdtqk" Mar 12 15:23:47 crc kubenswrapper[4832]: I0312 15:23:47.378177 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7-catalog-content\") pod \"certified-operators-fdtqk\" (UID: \"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7\") " pod="openshift-marketplace/certified-operators-fdtqk" Mar 12 15:23:47 crc kubenswrapper[4832]: I0312 15:23:47.378342 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x2cc\" (UniqueName: \"kubernetes.io/projected/ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7-kube-api-access-8x2cc\") pod \"certified-operators-fdtqk\" (UID: \"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7\") " pod="openshift-marketplace/certified-operators-fdtqk" Mar 12 15:23:47 crc kubenswrapper[4832]: I0312 15:23:47.480281 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7-utilities\") pod \"certified-operators-fdtqk\" (UID: \"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7\") " pod="openshift-marketplace/certified-operators-fdtqk" Mar 12 15:23:47 crc kubenswrapper[4832]: I0312 15:23:47.480660 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7-catalog-content\") pod \"certified-operators-fdtqk\" (UID: \"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7\") " pod="openshift-marketplace/certified-operators-fdtqk" Mar 12 15:23:47 crc kubenswrapper[4832]: I0312 15:23:47.480876 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x2cc\" (UniqueName: \"kubernetes.io/projected/ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7-kube-api-access-8x2cc\") pod \"certified-operators-fdtqk\" (UID: \"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7\") " pod="openshift-marketplace/certified-operators-fdtqk" Mar 12 15:23:47 crc kubenswrapper[4832]: I0312 15:23:47.481022 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7-utilities\") pod \"certified-operators-fdtqk\" (UID: \"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7\") " pod="openshift-marketplace/certified-operators-fdtqk" Mar 12 15:23:47 crc kubenswrapper[4832]: I0312 15:23:47.481236 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7-catalog-content\") pod \"certified-operators-fdtqk\" (UID: \"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7\") " pod="openshift-marketplace/certified-operators-fdtqk" Mar 12 15:23:47 crc kubenswrapper[4832]: I0312 15:23:47.506266 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x2cc\" (UniqueName: \"kubernetes.io/projected/ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7-kube-api-access-8x2cc\") pod \"certified-operators-fdtqk\" (UID: \"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7\") " pod="openshift-marketplace/certified-operators-fdtqk" Mar 12 15:23:47 crc kubenswrapper[4832]: I0312 15:23:47.598337 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fdtqk" Mar 12 15:23:48 crc kubenswrapper[4832]: I0312 15:23:48.162132 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdtqk"] Mar 12 15:23:48 crc kubenswrapper[4832]: I0312 15:23:48.368725 4832 generic.go:334] "Generic (PLEG): container finished" podID="ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7" containerID="415f422a87e2a0abbc0c6160338ebacdfdb82a4a415e0f93d0daa66fbfb0853e" exitCode=0 Mar 12 15:23:48 crc kubenswrapper[4832]: I0312 15:23:48.368858 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdtqk" event={"ID":"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7","Type":"ContainerDied","Data":"415f422a87e2a0abbc0c6160338ebacdfdb82a4a415e0f93d0daa66fbfb0853e"} Mar 12 15:23:48 crc kubenswrapper[4832]: I0312 15:23:48.369038 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdtqk" event={"ID":"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7","Type":"ContainerStarted","Data":"f123fe6087b9b6760b3efdc30943c412a917f8d801d81d98d00ed5e801e07860"} Mar 12 15:23:49 crc kubenswrapper[4832]: I0312 15:23:49.378609 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdtqk" event={"ID":"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7","Type":"ContainerStarted","Data":"f929896cce3e973500cef92af0dcea1d7710fcbc7faf141c6cf9ff2c454b9fb3"} Mar 12 15:23:50 crc kubenswrapper[4832]: I0312 15:23:50.390231 4832 generic.go:334] "Generic (PLEG): container finished" podID="ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7" containerID="f929896cce3e973500cef92af0dcea1d7710fcbc7faf141c6cf9ff2c454b9fb3" exitCode=0 Mar 12 15:23:50 crc kubenswrapper[4832]: I0312 15:23:50.390339 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdtqk" 
event={"ID":"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7","Type":"ContainerDied","Data":"f929896cce3e973500cef92af0dcea1d7710fcbc7faf141c6cf9ff2c454b9fb3"} Mar 12 15:23:51 crc kubenswrapper[4832]: I0312 15:23:51.401847 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdtqk" event={"ID":"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7","Type":"ContainerStarted","Data":"b2ea53aca6e99cfdcd1a2ff6d284043feaf674fb9e4c6b299c31f3af818aa191"} Mar 12 15:23:51 crc kubenswrapper[4832]: I0312 15:23:51.418445 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fdtqk" podStartSLOduration=1.989984848 podStartE2EDuration="4.418425962s" podCreationTimestamp="2026-03-12 15:23:47 +0000 UTC" firstStartedPulling="2026-03-12 15:23:48.37063361 +0000 UTC m=+2187.014647846" lastFinishedPulling="2026-03-12 15:23:50.799074744 +0000 UTC m=+2189.443088960" observedRunningTime="2026-03-12 15:23:51.417573288 +0000 UTC m=+2190.061587514" watchObservedRunningTime="2026-03-12 15:23:51.418425962 +0000 UTC m=+2190.062440188" Mar 12 15:23:57 crc kubenswrapper[4832]: I0312 15:23:57.598848 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fdtqk" Mar 12 15:23:57 crc kubenswrapper[4832]: I0312 15:23:57.599418 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fdtqk" Mar 12 15:23:57 crc kubenswrapper[4832]: I0312 15:23:57.661070 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fdtqk" Mar 12 15:23:58 crc kubenswrapper[4832]: I0312 15:23:58.523443 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fdtqk" Mar 12 15:23:58 crc kubenswrapper[4832]: I0312 15:23:58.578868 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-fdtqk"] Mar 12 15:24:00 crc kubenswrapper[4832]: I0312 15:24:00.155316 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555484-tgpln"] Mar 12 15:24:00 crc kubenswrapper[4832]: I0312 15:24:00.157591 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555484-tgpln" Mar 12 15:24:00 crc kubenswrapper[4832]: I0312 15:24:00.166874 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555484-tgpln"] Mar 12 15:24:00 crc kubenswrapper[4832]: I0312 15:24:00.189761 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:24:00 crc kubenswrapper[4832]: I0312 15:24:00.189997 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:24:00 crc kubenswrapper[4832]: I0312 15:24:00.190469 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:24:00 crc kubenswrapper[4832]: I0312 15:24:00.331245 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v69ph\" (UniqueName: \"kubernetes.io/projected/b81ca45b-eed0-4034-976f-582b26effa59-kube-api-access-v69ph\") pod \"auto-csr-approver-29555484-tgpln\" (UID: \"b81ca45b-eed0-4034-976f-582b26effa59\") " pod="openshift-infra/auto-csr-approver-29555484-tgpln" Mar 12 15:24:00 crc kubenswrapper[4832]: I0312 15:24:00.434152 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v69ph\" (UniqueName: \"kubernetes.io/projected/b81ca45b-eed0-4034-976f-582b26effa59-kube-api-access-v69ph\") pod \"auto-csr-approver-29555484-tgpln\" (UID: \"b81ca45b-eed0-4034-976f-582b26effa59\") " pod="openshift-infra/auto-csr-approver-29555484-tgpln" Mar 12 15:24:00 crc 
kubenswrapper[4832]: I0312 15:24:00.460497 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v69ph\" (UniqueName: \"kubernetes.io/projected/b81ca45b-eed0-4034-976f-582b26effa59-kube-api-access-v69ph\") pod \"auto-csr-approver-29555484-tgpln\" (UID: \"b81ca45b-eed0-4034-976f-582b26effa59\") " pod="openshift-infra/auto-csr-approver-29555484-tgpln" Mar 12 15:24:00 crc kubenswrapper[4832]: I0312 15:24:00.490748 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fdtqk" podUID="ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7" containerName="registry-server" containerID="cri-o://b2ea53aca6e99cfdcd1a2ff6d284043feaf674fb9e4c6b299c31f3af818aa191" gracePeriod=2 Mar 12 15:24:00 crc kubenswrapper[4832]: I0312 15:24:00.510819 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555484-tgpln" Mar 12 15:24:00 crc kubenswrapper[4832]: I0312 15:24:00.945301 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555484-tgpln"] Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.026840 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fdtqk" Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.153285 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7-catalog-content\") pod \"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7\" (UID: \"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7\") " Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.153403 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x2cc\" (UniqueName: \"kubernetes.io/projected/ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7-kube-api-access-8x2cc\") pod \"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7\" (UID: \"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7\") " Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.153498 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7-utilities\") pod \"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7\" (UID: \"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7\") " Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.154666 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7-utilities" (OuterVolumeSpecName: "utilities") pod "ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7" (UID: "ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.159731 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7-kube-api-access-8x2cc" (OuterVolumeSpecName: "kube-api-access-8x2cc") pod "ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7" (UID: "ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7"). InnerVolumeSpecName "kube-api-access-8x2cc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.220244 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7" (UID: "ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.255724 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.255761 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x2cc\" (UniqueName: \"kubernetes.io/projected/ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7-kube-api-access-8x2cc\") on node \"crc\" DevicePath \"\"" Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.255775 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.502778 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555484-tgpln" event={"ID":"b81ca45b-eed0-4034-976f-582b26effa59","Type":"ContainerStarted","Data":"7e7926074fb0f282d8233611785f8b004d8828b020be8696f023984be308ba36"} Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.506407 4832 generic.go:334] "Generic (PLEG): container finished" podID="ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7" containerID="b2ea53aca6e99cfdcd1a2ff6d284043feaf674fb9e4c6b299c31f3af818aa191" exitCode=0 Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.506448 4832 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-fdtqk" event={"ID":"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7","Type":"ContainerDied","Data":"b2ea53aca6e99cfdcd1a2ff6d284043feaf674fb9e4c6b299c31f3af818aa191"} Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.506471 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdtqk" event={"ID":"ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7","Type":"ContainerDied","Data":"f123fe6087b9b6760b3efdc30943c412a917f8d801d81d98d00ed5e801e07860"} Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.506494 4832 scope.go:117] "RemoveContainer" containerID="b2ea53aca6e99cfdcd1a2ff6d284043feaf674fb9e4c6b299c31f3af818aa191" Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.506745 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fdtqk" Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.554649 4832 scope.go:117] "RemoveContainer" containerID="f929896cce3e973500cef92af0dcea1d7710fcbc7faf141c6cf9ff2c454b9fb3" Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.578557 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fdtqk"] Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.588676 4832 scope.go:117] "RemoveContainer" containerID="415f422a87e2a0abbc0c6160338ebacdfdb82a4a415e0f93d0daa66fbfb0853e" Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.612936 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fdtqk"] Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.651000 4832 scope.go:117] "RemoveContainer" containerID="b2ea53aca6e99cfdcd1a2ff6d284043feaf674fb9e4c6b299c31f3af818aa191" Mar 12 15:24:01 crc kubenswrapper[4832]: E0312 15:24:01.654992 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b2ea53aca6e99cfdcd1a2ff6d284043feaf674fb9e4c6b299c31f3af818aa191\": container with ID starting with b2ea53aca6e99cfdcd1a2ff6d284043feaf674fb9e4c6b299c31f3af818aa191 not found: ID does not exist" containerID="b2ea53aca6e99cfdcd1a2ff6d284043feaf674fb9e4c6b299c31f3af818aa191" Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.655031 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2ea53aca6e99cfdcd1a2ff6d284043feaf674fb9e4c6b299c31f3af818aa191"} err="failed to get container status \"b2ea53aca6e99cfdcd1a2ff6d284043feaf674fb9e4c6b299c31f3af818aa191\": rpc error: code = NotFound desc = could not find container \"b2ea53aca6e99cfdcd1a2ff6d284043feaf674fb9e4c6b299c31f3af818aa191\": container with ID starting with b2ea53aca6e99cfdcd1a2ff6d284043feaf674fb9e4c6b299c31f3af818aa191 not found: ID does not exist" Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.655054 4832 scope.go:117] "RemoveContainer" containerID="f929896cce3e973500cef92af0dcea1d7710fcbc7faf141c6cf9ff2c454b9fb3" Mar 12 15:24:01 crc kubenswrapper[4832]: E0312 15:24:01.658829 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f929896cce3e973500cef92af0dcea1d7710fcbc7faf141c6cf9ff2c454b9fb3\": container with ID starting with f929896cce3e973500cef92af0dcea1d7710fcbc7faf141c6cf9ff2c454b9fb3 not found: ID does not exist" containerID="f929896cce3e973500cef92af0dcea1d7710fcbc7faf141c6cf9ff2c454b9fb3" Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.658970 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f929896cce3e973500cef92af0dcea1d7710fcbc7faf141c6cf9ff2c454b9fb3"} err="failed to get container status \"f929896cce3e973500cef92af0dcea1d7710fcbc7faf141c6cf9ff2c454b9fb3\": rpc error: code = NotFound desc = could not find container \"f929896cce3e973500cef92af0dcea1d7710fcbc7faf141c6cf9ff2c454b9fb3\": container with ID 
starting with f929896cce3e973500cef92af0dcea1d7710fcbc7faf141c6cf9ff2c454b9fb3 not found: ID does not exist" Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.658999 4832 scope.go:117] "RemoveContainer" containerID="415f422a87e2a0abbc0c6160338ebacdfdb82a4a415e0f93d0daa66fbfb0853e" Mar 12 15:24:01 crc kubenswrapper[4832]: E0312 15:24:01.663911 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"415f422a87e2a0abbc0c6160338ebacdfdb82a4a415e0f93d0daa66fbfb0853e\": container with ID starting with 415f422a87e2a0abbc0c6160338ebacdfdb82a4a415e0f93d0daa66fbfb0853e not found: ID does not exist" containerID="415f422a87e2a0abbc0c6160338ebacdfdb82a4a415e0f93d0daa66fbfb0853e" Mar 12 15:24:01 crc kubenswrapper[4832]: I0312 15:24:01.663958 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"415f422a87e2a0abbc0c6160338ebacdfdb82a4a415e0f93d0daa66fbfb0853e"} err="failed to get container status \"415f422a87e2a0abbc0c6160338ebacdfdb82a4a415e0f93d0daa66fbfb0853e\": rpc error: code = NotFound desc = could not find container \"415f422a87e2a0abbc0c6160338ebacdfdb82a4a415e0f93d0daa66fbfb0853e\": container with ID starting with 415f422a87e2a0abbc0c6160338ebacdfdb82a4a415e0f93d0daa66fbfb0853e not found: ID does not exist" Mar 12 15:24:02 crc kubenswrapper[4832]: I0312 15:24:02.632817 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7" path="/var/lib/kubelet/pods/ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7/volumes" Mar 12 15:24:03 crc kubenswrapper[4832]: I0312 15:24:03.530325 4832 generic.go:334] "Generic (PLEG): container finished" podID="b81ca45b-eed0-4034-976f-582b26effa59" containerID="8fa3577d6b5e9fefeda6ea21ecce6c689ec589f6fa1e0f34f81ad78c5f685d0a" exitCode=0 Mar 12 15:24:03 crc kubenswrapper[4832]: I0312 15:24:03.530371 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29555484-tgpln" event={"ID":"b81ca45b-eed0-4034-976f-582b26effa59","Type":"ContainerDied","Data":"8fa3577d6b5e9fefeda6ea21ecce6c689ec589f6fa1e0f34f81ad78c5f685d0a"} Mar 12 15:24:04 crc kubenswrapper[4832]: I0312 15:24:04.994136 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555484-tgpln" Mar 12 15:24:05 crc kubenswrapper[4832]: I0312 15:24:05.136838 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v69ph\" (UniqueName: \"kubernetes.io/projected/b81ca45b-eed0-4034-976f-582b26effa59-kube-api-access-v69ph\") pod \"b81ca45b-eed0-4034-976f-582b26effa59\" (UID: \"b81ca45b-eed0-4034-976f-582b26effa59\") " Mar 12 15:24:05 crc kubenswrapper[4832]: I0312 15:24:05.142358 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b81ca45b-eed0-4034-976f-582b26effa59-kube-api-access-v69ph" (OuterVolumeSpecName: "kube-api-access-v69ph") pod "b81ca45b-eed0-4034-976f-582b26effa59" (UID: "b81ca45b-eed0-4034-976f-582b26effa59"). InnerVolumeSpecName "kube-api-access-v69ph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:24:05 crc kubenswrapper[4832]: I0312 15:24:05.240113 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v69ph\" (UniqueName: \"kubernetes.io/projected/b81ca45b-eed0-4034-976f-582b26effa59-kube-api-access-v69ph\") on node \"crc\" DevicePath \"\"" Mar 12 15:24:05 crc kubenswrapper[4832]: I0312 15:24:05.554606 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555484-tgpln" event={"ID":"b81ca45b-eed0-4034-976f-582b26effa59","Type":"ContainerDied","Data":"7e7926074fb0f282d8233611785f8b004d8828b020be8696f023984be308ba36"} Mar 12 15:24:05 crc kubenswrapper[4832]: I0312 15:24:05.554675 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e7926074fb0f282d8233611785f8b004d8828b020be8696f023984be308ba36" Mar 12 15:24:05 crc kubenswrapper[4832]: I0312 15:24:05.554754 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555484-tgpln" Mar 12 15:24:06 crc kubenswrapper[4832]: I0312 15:24:06.065621 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555478-5rzlh"] Mar 12 15:24:06 crc kubenswrapper[4832]: I0312 15:24:06.074061 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555478-5rzlh"] Mar 12 15:24:06 crc kubenswrapper[4832]: I0312 15:24:06.634717 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8580854-c938-48ab-828c-406427eab926" path="/var/lib/kubelet/pods/d8580854-c938-48ab-828c-406427eab926/volumes" Mar 12 15:24:26 crc kubenswrapper[4832]: I0312 15:24:26.314924 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 12 15:24:26 crc kubenswrapper[4832]: I0312 15:24:26.315654 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:24:51 crc kubenswrapper[4832]: I0312 15:24:51.343595 4832 scope.go:117] "RemoveContainer" containerID="f3e4beab8740363aa3eb579b9b5db21575d6dc3e5327c5917080f9c5626c1a56" Mar 12 15:24:56 crc kubenswrapper[4832]: I0312 15:24:56.313952 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:24:56 crc kubenswrapper[4832]: I0312 15:24:56.314397 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:25:26 crc kubenswrapper[4832]: I0312 15:25:26.315089 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:25:26 crc kubenswrapper[4832]: I0312 15:25:26.315771 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:25:26 crc kubenswrapper[4832]: I0312 15:25:26.315835 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 15:25:26 crc kubenswrapper[4832]: I0312 15:25:26.316846 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146"} pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 15:25:26 crc kubenswrapper[4832]: I0312 15:25:26.316936 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" containerID="cri-o://13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" gracePeriod=600 Mar 12 15:25:26 crc kubenswrapper[4832]: E0312 15:25:26.449568 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:25:27 crc kubenswrapper[4832]: I0312 15:25:27.458105 4832 generic.go:334] "Generic (PLEG): container finished" podID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" exitCode=0 Mar 12 15:25:27 crc kubenswrapper[4832]: I0312 15:25:27.458182 4832 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerDied","Data":"13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146"} Mar 12 15:25:27 crc kubenswrapper[4832]: I0312 15:25:27.458271 4832 scope.go:117] "RemoveContainer" containerID="8770e7b9c1d71d91f69b085e39e97d28156baef5e8fd0f0f7513dd17570fa130" Mar 12 15:25:27 crc kubenswrapper[4832]: I0312 15:25:27.459131 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:25:27 crc kubenswrapper[4832]: E0312 15:25:27.459662 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:25:41 crc kubenswrapper[4832]: I0312 15:25:41.620372 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:25:41 crc kubenswrapper[4832]: E0312 15:25:41.622940 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:25:51 crc kubenswrapper[4832]: I0312 15:25:51.345196 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cnpkg"] Mar 12 15:25:51 crc kubenswrapper[4832]: E0312 15:25:51.346188 4832 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7" containerName="registry-server" Mar 12 15:25:51 crc kubenswrapper[4832]: I0312 15:25:51.346204 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7" containerName="registry-server" Mar 12 15:25:51 crc kubenswrapper[4832]: E0312 15:25:51.346242 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7" containerName="extract-content" Mar 12 15:25:51 crc kubenswrapper[4832]: I0312 15:25:51.346249 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7" containerName="extract-content" Mar 12 15:25:51 crc kubenswrapper[4832]: E0312 15:25:51.346265 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81ca45b-eed0-4034-976f-582b26effa59" containerName="oc" Mar 12 15:25:51 crc kubenswrapper[4832]: I0312 15:25:51.346273 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81ca45b-eed0-4034-976f-582b26effa59" containerName="oc" Mar 12 15:25:51 crc kubenswrapper[4832]: E0312 15:25:51.346296 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7" containerName="extract-utilities" Mar 12 15:25:51 crc kubenswrapper[4832]: I0312 15:25:51.346305 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7" containerName="extract-utilities" Mar 12 15:25:51 crc kubenswrapper[4832]: I0312 15:25:51.346528 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebc9ab39-26e5-40c8-94c8-5b9d2497d4d7" containerName="registry-server" Mar 12 15:25:51 crc kubenswrapper[4832]: I0312 15:25:51.346544 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b81ca45b-eed0-4034-976f-582b26effa59" containerName="oc" Mar 12 15:25:51 crc kubenswrapper[4832]: I0312 15:25:51.348139 4832 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnpkg" Mar 12 15:25:51 crc kubenswrapper[4832]: I0312 15:25:51.374369 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnpkg"] Mar 12 15:25:51 crc kubenswrapper[4832]: I0312 15:25:51.532683 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfsnr\" (UniqueName: \"kubernetes.io/projected/3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a-kube-api-access-qfsnr\") pod \"redhat-marketplace-cnpkg\" (UID: \"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a\") " pod="openshift-marketplace/redhat-marketplace-cnpkg" Mar 12 15:25:51 crc kubenswrapper[4832]: I0312 15:25:51.532756 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a-utilities\") pod \"redhat-marketplace-cnpkg\" (UID: \"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a\") " pod="openshift-marketplace/redhat-marketplace-cnpkg" Mar 12 15:25:51 crc kubenswrapper[4832]: I0312 15:25:51.532788 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a-catalog-content\") pod \"redhat-marketplace-cnpkg\" (UID: \"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a\") " pod="openshift-marketplace/redhat-marketplace-cnpkg" Mar 12 15:25:51 crc kubenswrapper[4832]: I0312 15:25:51.634361 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfsnr\" (UniqueName: \"kubernetes.io/projected/3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a-kube-api-access-qfsnr\") pod \"redhat-marketplace-cnpkg\" (UID: \"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a\") " pod="openshift-marketplace/redhat-marketplace-cnpkg" Mar 12 15:25:51 crc kubenswrapper[4832]: I0312 15:25:51.634431 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a-utilities\") pod \"redhat-marketplace-cnpkg\" (UID: \"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a\") " pod="openshift-marketplace/redhat-marketplace-cnpkg" Mar 12 15:25:51 crc kubenswrapper[4832]: I0312 15:25:51.634472 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a-catalog-content\") pod \"redhat-marketplace-cnpkg\" (UID: \"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a\") " pod="openshift-marketplace/redhat-marketplace-cnpkg" Mar 12 15:25:51 crc kubenswrapper[4832]: I0312 15:25:51.635102 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a-catalog-content\") pod \"redhat-marketplace-cnpkg\" (UID: \"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a\") " pod="openshift-marketplace/redhat-marketplace-cnpkg" Mar 12 15:25:51 crc kubenswrapper[4832]: I0312 15:25:51.635115 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a-utilities\") pod \"redhat-marketplace-cnpkg\" (UID: \"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a\") " pod="openshift-marketplace/redhat-marketplace-cnpkg" Mar 12 15:25:51 crc kubenswrapper[4832]: I0312 15:25:51.656419 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfsnr\" (UniqueName: \"kubernetes.io/projected/3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a-kube-api-access-qfsnr\") pod \"redhat-marketplace-cnpkg\" (UID: \"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a\") " pod="openshift-marketplace/redhat-marketplace-cnpkg" Mar 12 15:25:51 crc kubenswrapper[4832]: I0312 15:25:51.679297 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnpkg" Mar 12 15:25:52 crc kubenswrapper[4832]: I0312 15:25:52.139058 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnpkg"] Mar 12 15:25:52 crc kubenswrapper[4832]: I0312 15:25:52.749384 4832 generic.go:334] "Generic (PLEG): container finished" podID="3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a" containerID="57d3b6aa8e2dab6b05ccb313d2c5bd90b1d2dd4f6848d0585593af50fcede6ab" exitCode=0 Mar 12 15:25:52 crc kubenswrapper[4832]: I0312 15:25:52.749424 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnpkg" event={"ID":"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a","Type":"ContainerDied","Data":"57d3b6aa8e2dab6b05ccb313d2c5bd90b1d2dd4f6848d0585593af50fcede6ab"} Mar 12 15:25:52 crc kubenswrapper[4832]: I0312 15:25:52.749763 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnpkg" event={"ID":"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a","Type":"ContainerStarted","Data":"966c1c18e1908af9e29bcba95b3001826237238dfb4286384d1661ef5a939db1"} Mar 12 15:25:56 crc kubenswrapper[4832]: I0312 15:25:56.619980 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:25:56 crc kubenswrapper[4832]: E0312 15:25:56.620650 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:25:58 crc kubenswrapper[4832]: I0312 15:25:58.814815 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnpkg" 
event={"ID":"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a","Type":"ContainerStarted","Data":"28e3a5dda75e518db111a70fc77c071e566cd58107d4fc60680f57b7fe11680a"} Mar 12 15:25:59 crc kubenswrapper[4832]: I0312 15:25:59.830619 4832 generic.go:334] "Generic (PLEG): container finished" podID="3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a" containerID="28e3a5dda75e518db111a70fc77c071e566cd58107d4fc60680f57b7fe11680a" exitCode=0 Mar 12 15:25:59 crc kubenswrapper[4832]: I0312 15:25:59.831000 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnpkg" event={"ID":"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a","Type":"ContainerDied","Data":"28e3a5dda75e518db111a70fc77c071e566cd58107d4fc60680f57b7fe11680a"} Mar 12 15:26:00 crc kubenswrapper[4832]: I0312 15:26:00.161210 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555486-q5846"] Mar 12 15:26:00 crc kubenswrapper[4832]: I0312 15:26:00.162956 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555486-q5846" Mar 12 15:26:00 crc kubenswrapper[4832]: I0312 15:26:00.172008 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:26:00 crc kubenswrapper[4832]: I0312 15:26:00.173990 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:26:00 crc kubenswrapper[4832]: I0312 15:26:00.174203 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:26:00 crc kubenswrapper[4832]: I0312 15:26:00.175829 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555486-q5846"] Mar 12 15:26:00 crc kubenswrapper[4832]: I0312 15:26:00.311787 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47rh7\" (UniqueName: \"kubernetes.io/projected/1433b9c5-e221-4842-8159-342b2359622f-kube-api-access-47rh7\") pod \"auto-csr-approver-29555486-q5846\" (UID: \"1433b9c5-e221-4842-8159-342b2359622f\") " pod="openshift-infra/auto-csr-approver-29555486-q5846" Mar 12 15:26:00 crc kubenswrapper[4832]: I0312 15:26:00.413453 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47rh7\" (UniqueName: \"kubernetes.io/projected/1433b9c5-e221-4842-8159-342b2359622f-kube-api-access-47rh7\") pod \"auto-csr-approver-29555486-q5846\" (UID: \"1433b9c5-e221-4842-8159-342b2359622f\") " pod="openshift-infra/auto-csr-approver-29555486-q5846" Mar 12 15:26:00 crc kubenswrapper[4832]: I0312 15:26:00.437033 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47rh7\" (UniqueName: \"kubernetes.io/projected/1433b9c5-e221-4842-8159-342b2359622f-kube-api-access-47rh7\") pod \"auto-csr-approver-29555486-q5846\" (UID: \"1433b9c5-e221-4842-8159-342b2359622f\") " 
pod="openshift-infra/auto-csr-approver-29555486-q5846" Mar 12 15:26:00 crc kubenswrapper[4832]: I0312 15:26:00.497216 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555486-q5846" Mar 12 15:26:00 crc kubenswrapper[4832]: I0312 15:26:00.934980 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555486-q5846"] Mar 12 15:26:01 crc kubenswrapper[4832]: I0312 15:26:01.857392 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnpkg" event={"ID":"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a","Type":"ContainerStarted","Data":"9c9b93c62f9e8e87a6d0e6d53e0a431d1d4b759b49178658526c76401fe53bb0"} Mar 12 15:26:01 crc kubenswrapper[4832]: I0312 15:26:01.862894 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555486-q5846" event={"ID":"1433b9c5-e221-4842-8159-342b2359622f","Type":"ContainerStarted","Data":"e44dd383aa11a46878e8f88a6063577561c5e89f7623dd0c549415236abe6a10"} Mar 12 15:26:01 crc kubenswrapper[4832]: I0312 15:26:01.880593 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cnpkg" podStartSLOduration=2.573858908 podStartE2EDuration="10.880575343s" podCreationTimestamp="2026-03-12 15:25:51 +0000 UTC" firstStartedPulling="2026-03-12 15:25:52.752549173 +0000 UTC m=+2311.396563419" lastFinishedPulling="2026-03-12 15:26:01.059265618 +0000 UTC m=+2319.703279854" observedRunningTime="2026-03-12 15:26:01.877451634 +0000 UTC m=+2320.521465880" watchObservedRunningTime="2026-03-12 15:26:01.880575343 +0000 UTC m=+2320.524589569" Mar 12 15:26:03 crc kubenswrapper[4832]: I0312 15:26:03.881364 4832 generic.go:334] "Generic (PLEG): container finished" podID="1433b9c5-e221-4842-8159-342b2359622f" containerID="629b1f8959958679c48cbf38f798cb0f860957f5363cef618aa5169e35fafaa5" exitCode=0 Mar 12 15:26:03 crc 
kubenswrapper[4832]: I0312 15:26:03.881583 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555486-q5846" event={"ID":"1433b9c5-e221-4842-8159-342b2359622f","Type":"ContainerDied","Data":"629b1f8959958679c48cbf38f798cb0f860957f5363cef618aa5169e35fafaa5"} Mar 12 15:26:05 crc kubenswrapper[4832]: I0312 15:26:05.245806 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555486-q5846" Mar 12 15:26:05 crc kubenswrapper[4832]: I0312 15:26:05.412353 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47rh7\" (UniqueName: \"kubernetes.io/projected/1433b9c5-e221-4842-8159-342b2359622f-kube-api-access-47rh7\") pod \"1433b9c5-e221-4842-8159-342b2359622f\" (UID: \"1433b9c5-e221-4842-8159-342b2359622f\") " Mar 12 15:26:05 crc kubenswrapper[4832]: I0312 15:26:05.418710 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1433b9c5-e221-4842-8159-342b2359622f-kube-api-access-47rh7" (OuterVolumeSpecName: "kube-api-access-47rh7") pod "1433b9c5-e221-4842-8159-342b2359622f" (UID: "1433b9c5-e221-4842-8159-342b2359622f"). InnerVolumeSpecName "kube-api-access-47rh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:26:05 crc kubenswrapper[4832]: I0312 15:26:05.514980 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47rh7\" (UniqueName: \"kubernetes.io/projected/1433b9c5-e221-4842-8159-342b2359622f-kube-api-access-47rh7\") on node \"crc\" DevicePath \"\"" Mar 12 15:26:05 crc kubenswrapper[4832]: I0312 15:26:05.907939 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555486-q5846" event={"ID":"1433b9c5-e221-4842-8159-342b2359622f","Type":"ContainerDied","Data":"e44dd383aa11a46878e8f88a6063577561c5e89f7623dd0c549415236abe6a10"} Mar 12 15:26:05 crc kubenswrapper[4832]: I0312 15:26:05.908003 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e44dd383aa11a46878e8f88a6063577561c5e89f7623dd0c549415236abe6a10" Mar 12 15:26:05 crc kubenswrapper[4832]: I0312 15:26:05.908014 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555486-q5846" Mar 12 15:26:06 crc kubenswrapper[4832]: I0312 15:26:06.331233 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555480-9j2vd"] Mar 12 15:26:06 crc kubenswrapper[4832]: I0312 15:26:06.341638 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555480-9j2vd"] Mar 12 15:26:06 crc kubenswrapper[4832]: I0312 15:26:06.633617 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92cd9e7b-9280-491d-93d3-1bfa4990dd36" path="/var/lib/kubelet/pods/92cd9e7b-9280-491d-93d3-1bfa4990dd36/volumes" Mar 12 15:26:07 crc kubenswrapper[4832]: I0312 15:26:07.620585 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:26:07 crc kubenswrapper[4832]: E0312 15:26:07.620883 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:26:11 crc kubenswrapper[4832]: I0312 15:26:11.680135 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cnpkg" Mar 12 15:26:11 crc kubenswrapper[4832]: I0312 15:26:11.680457 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cnpkg" Mar 12 15:26:11 crc kubenswrapper[4832]: I0312 15:26:11.722576 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cnpkg" Mar 12 15:26:12 crc kubenswrapper[4832]: I0312 15:26:12.038885 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cnpkg" Mar 12 15:26:12 crc kubenswrapper[4832]: I0312 15:26:12.090012 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnpkg"] Mar 12 15:26:14 crc kubenswrapper[4832]: I0312 15:26:14.017355 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cnpkg" podUID="3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a" containerName="registry-server" containerID="cri-o://9c9b93c62f9e8e87a6d0e6d53e0a431d1d4b759b49178658526c76401fe53bb0" gracePeriod=2 Mar 12 15:26:14 crc kubenswrapper[4832]: I0312 15:26:14.505518 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnpkg" Mar 12 15:26:14 crc kubenswrapper[4832]: I0312 15:26:14.586646 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a-utilities\") pod \"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a\" (UID: \"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a\") " Mar 12 15:26:14 crc kubenswrapper[4832]: I0312 15:26:14.586861 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a-catalog-content\") pod \"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a\" (UID: \"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a\") " Mar 12 15:26:14 crc kubenswrapper[4832]: I0312 15:26:14.586944 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfsnr\" (UniqueName: \"kubernetes.io/projected/3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a-kube-api-access-qfsnr\") pod \"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a\" (UID: \"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a\") " Mar 12 15:26:14 crc kubenswrapper[4832]: I0312 15:26:14.587377 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a-utilities" (OuterVolumeSpecName: "utilities") pod "3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a" (UID: "3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:26:14 crc kubenswrapper[4832]: I0312 15:26:14.592754 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a-kube-api-access-qfsnr" (OuterVolumeSpecName: "kube-api-access-qfsnr") pod "3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a" (UID: "3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a"). InnerVolumeSpecName "kube-api-access-qfsnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:26:14 crc kubenswrapper[4832]: I0312 15:26:14.611594 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a" (UID: "3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:26:14 crc kubenswrapper[4832]: I0312 15:26:14.688579 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:26:14 crc kubenswrapper[4832]: I0312 15:26:14.688606 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfsnr\" (UniqueName: \"kubernetes.io/projected/3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a-kube-api-access-qfsnr\") on node \"crc\" DevicePath \"\"" Mar 12 15:26:14 crc kubenswrapper[4832]: I0312 15:26:14.688615 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:26:15 crc kubenswrapper[4832]: I0312 15:26:15.030990 4832 generic.go:334] "Generic (PLEG): container finished" podID="3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a" containerID="9c9b93c62f9e8e87a6d0e6d53e0a431d1d4b759b49178658526c76401fe53bb0" exitCode=0 Mar 12 15:26:15 crc kubenswrapper[4832]: I0312 15:26:15.031035 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnpkg" event={"ID":"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a","Type":"ContainerDied","Data":"9c9b93c62f9e8e87a6d0e6d53e0a431d1d4b759b49178658526c76401fe53bb0"} Mar 12 15:26:15 crc kubenswrapper[4832]: I0312 15:26:15.031067 4832 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-cnpkg" event={"ID":"3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a","Type":"ContainerDied","Data":"966c1c18e1908af9e29bcba95b3001826237238dfb4286384d1661ef5a939db1"} Mar 12 15:26:15 crc kubenswrapper[4832]: I0312 15:26:15.031087 4832 scope.go:117] "RemoveContainer" containerID="9c9b93c62f9e8e87a6d0e6d53e0a431d1d4b759b49178658526c76401fe53bb0" Mar 12 15:26:15 crc kubenswrapper[4832]: I0312 15:26:15.031119 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnpkg" Mar 12 15:26:15 crc kubenswrapper[4832]: I0312 15:26:15.077398 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnpkg"] Mar 12 15:26:15 crc kubenswrapper[4832]: I0312 15:26:15.081099 4832 scope.go:117] "RemoveContainer" containerID="28e3a5dda75e518db111a70fc77c071e566cd58107d4fc60680f57b7fe11680a" Mar 12 15:26:15 crc kubenswrapper[4832]: I0312 15:26:15.114642 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnpkg"] Mar 12 15:26:15 crc kubenswrapper[4832]: I0312 15:26:15.132795 4832 scope.go:117] "RemoveContainer" containerID="57d3b6aa8e2dab6b05ccb313d2c5bd90b1d2dd4f6848d0585593af50fcede6ab" Mar 12 15:26:15 crc kubenswrapper[4832]: I0312 15:26:15.175120 4832 scope.go:117] "RemoveContainer" containerID="9c9b93c62f9e8e87a6d0e6d53e0a431d1d4b759b49178658526c76401fe53bb0" Mar 12 15:26:15 crc kubenswrapper[4832]: E0312 15:26:15.175569 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c9b93c62f9e8e87a6d0e6d53e0a431d1d4b759b49178658526c76401fe53bb0\": container with ID starting with 9c9b93c62f9e8e87a6d0e6d53e0a431d1d4b759b49178658526c76401fe53bb0 not found: ID does not exist" containerID="9c9b93c62f9e8e87a6d0e6d53e0a431d1d4b759b49178658526c76401fe53bb0" Mar 12 15:26:15 crc kubenswrapper[4832]: I0312 15:26:15.175604 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c9b93c62f9e8e87a6d0e6d53e0a431d1d4b759b49178658526c76401fe53bb0"} err="failed to get container status \"9c9b93c62f9e8e87a6d0e6d53e0a431d1d4b759b49178658526c76401fe53bb0\": rpc error: code = NotFound desc = could not find container \"9c9b93c62f9e8e87a6d0e6d53e0a431d1d4b759b49178658526c76401fe53bb0\": container with ID starting with 9c9b93c62f9e8e87a6d0e6d53e0a431d1d4b759b49178658526c76401fe53bb0 not found: ID does not exist" Mar 12 15:26:15 crc kubenswrapper[4832]: I0312 15:26:15.175630 4832 scope.go:117] "RemoveContainer" containerID="28e3a5dda75e518db111a70fc77c071e566cd58107d4fc60680f57b7fe11680a" Mar 12 15:26:15 crc kubenswrapper[4832]: E0312 15:26:15.176230 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28e3a5dda75e518db111a70fc77c071e566cd58107d4fc60680f57b7fe11680a\": container with ID starting with 28e3a5dda75e518db111a70fc77c071e566cd58107d4fc60680f57b7fe11680a not found: ID does not exist" containerID="28e3a5dda75e518db111a70fc77c071e566cd58107d4fc60680f57b7fe11680a" Mar 12 15:26:15 crc kubenswrapper[4832]: I0312 15:26:15.176261 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28e3a5dda75e518db111a70fc77c071e566cd58107d4fc60680f57b7fe11680a"} err="failed to get container status \"28e3a5dda75e518db111a70fc77c071e566cd58107d4fc60680f57b7fe11680a\": rpc error: code = NotFound desc = could not find container \"28e3a5dda75e518db111a70fc77c071e566cd58107d4fc60680f57b7fe11680a\": container with ID starting with 28e3a5dda75e518db111a70fc77c071e566cd58107d4fc60680f57b7fe11680a not found: ID does not exist" Mar 12 15:26:15 crc kubenswrapper[4832]: I0312 15:26:15.176279 4832 scope.go:117] "RemoveContainer" containerID="57d3b6aa8e2dab6b05ccb313d2c5bd90b1d2dd4f6848d0585593af50fcede6ab" Mar 12 15:26:15 crc kubenswrapper[4832]: E0312 
15:26:15.181200 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57d3b6aa8e2dab6b05ccb313d2c5bd90b1d2dd4f6848d0585593af50fcede6ab\": container with ID starting with 57d3b6aa8e2dab6b05ccb313d2c5bd90b1d2dd4f6848d0585593af50fcede6ab not found: ID does not exist" containerID="57d3b6aa8e2dab6b05ccb313d2c5bd90b1d2dd4f6848d0585593af50fcede6ab" Mar 12 15:26:15 crc kubenswrapper[4832]: I0312 15:26:15.181367 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57d3b6aa8e2dab6b05ccb313d2c5bd90b1d2dd4f6848d0585593af50fcede6ab"} err="failed to get container status \"57d3b6aa8e2dab6b05ccb313d2c5bd90b1d2dd4f6848d0585593af50fcede6ab\": rpc error: code = NotFound desc = could not find container \"57d3b6aa8e2dab6b05ccb313d2c5bd90b1d2dd4f6848d0585593af50fcede6ab\": container with ID starting with 57d3b6aa8e2dab6b05ccb313d2c5bd90b1d2dd4f6848d0585593af50fcede6ab not found: ID does not exist" Mar 12 15:26:16 crc kubenswrapper[4832]: I0312 15:26:16.635222 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a" path="/var/lib/kubelet/pods/3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a/volumes" Mar 12 15:26:21 crc kubenswrapper[4832]: I0312 15:26:21.620105 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:26:21 crc kubenswrapper[4832]: E0312 15:26:21.621130 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:26:34 crc kubenswrapper[4832]: I0312 15:26:34.620401 
4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:26:34 crc kubenswrapper[4832]: E0312 15:26:34.623130 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:26:47 crc kubenswrapper[4832]: I0312 15:26:47.619863 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:26:47 crc kubenswrapper[4832]: E0312 15:26:47.620909 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:26:51 crc kubenswrapper[4832]: I0312 15:26:51.471958 4832 scope.go:117] "RemoveContainer" containerID="c8d40d98c0122a764d4c20c98f3a923d611ba9d7ef2bd04c3f35bd991f930cb0" Mar 12 15:26:59 crc kubenswrapper[4832]: I0312 15:26:59.510611 4832 generic.go:334] "Generic (PLEG): container finished" podID="a043f4e6-f64c-4a26-ac04-93005bcc77d0" containerID="37e4dbbf4813b5c122820d52fffdaa28b20895c1251357ee7a78e68a5a1fee69" exitCode=0 Mar 12 15:26:59 crc kubenswrapper[4832]: I0312 15:26:59.510711 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" 
event={"ID":"a043f4e6-f64c-4a26-ac04-93005bcc77d0","Type":"ContainerDied","Data":"37e4dbbf4813b5c122820d52fffdaa28b20895c1251357ee7a78e68a5a1fee69"} Mar 12 15:27:00 crc kubenswrapper[4832]: I0312 15:27:00.620120 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:27:00 crc kubenswrapper[4832]: E0312 15:27:00.620733 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:27:00 crc kubenswrapper[4832]: I0312 15:27:00.979478 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.135897 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qdff\" (UniqueName: \"kubernetes.io/projected/a043f4e6-f64c-4a26-ac04-93005bcc77d0-kube-api-access-5qdff\") pod \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.135950 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-libvirt-combined-ca-bundle\") pod \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.136074 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-ssh-key-openstack-edpm-ipam\") pod \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.136238 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-inventory\") pod \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.136281 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-libvirt-secret-0\") pod \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\" (UID: \"a043f4e6-f64c-4a26-ac04-93005bcc77d0\") " Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.141769 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a043f4e6-f64c-4a26-ac04-93005bcc77d0" (UID: "a043f4e6-f64c-4a26-ac04-93005bcc77d0"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.145279 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a043f4e6-f64c-4a26-ac04-93005bcc77d0-kube-api-access-5qdff" (OuterVolumeSpecName: "kube-api-access-5qdff") pod "a043f4e6-f64c-4a26-ac04-93005bcc77d0" (UID: "a043f4e6-f64c-4a26-ac04-93005bcc77d0"). InnerVolumeSpecName "kube-api-access-5qdff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.162961 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-inventory" (OuterVolumeSpecName: "inventory") pod "a043f4e6-f64c-4a26-ac04-93005bcc77d0" (UID: "a043f4e6-f64c-4a26-ac04-93005bcc77d0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.169493 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a043f4e6-f64c-4a26-ac04-93005bcc77d0" (UID: "a043f4e6-f64c-4a26-ac04-93005bcc77d0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.175890 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "a043f4e6-f64c-4a26-ac04-93005bcc77d0" (UID: "a043f4e6-f64c-4a26-ac04-93005bcc77d0"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.238716 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.238765 4832 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.238787 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qdff\" (UniqueName: \"kubernetes.io/projected/a043f4e6-f64c-4a26-ac04-93005bcc77d0-kube-api-access-5qdff\") on node \"crc\" DevicePath \"\"" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.238805 4832 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.238824 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a043f4e6-f64c-4a26-ac04-93005bcc77d0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.531708 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" event={"ID":"a043f4e6-f64c-4a26-ac04-93005bcc77d0","Type":"ContainerDied","Data":"ed3ef45529687b2a3f91d24bee38a99d69d7e7956ece787342fcd61d402385af"} Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.531765 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed3ef45529687b2a3f91d24bee38a99d69d7e7956ece787342fcd61d402385af" Mar 12 
15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.531830 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wmghq" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.638424 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n"] Mar 12 15:27:01 crc kubenswrapper[4832]: E0312 15:27:01.638988 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a" containerName="extract-utilities" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.639017 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a" containerName="extract-utilities" Mar 12 15:27:01 crc kubenswrapper[4832]: E0312 15:27:01.639045 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a" containerName="extract-content" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.639054 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a" containerName="extract-content" Mar 12 15:27:01 crc kubenswrapper[4832]: E0312 15:27:01.639071 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a" containerName="registry-server" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.639077 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a" containerName="registry-server" Mar 12 15:27:01 crc kubenswrapper[4832]: E0312 15:27:01.639090 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1433b9c5-e221-4842-8159-342b2359622f" containerName="oc" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.639097 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1433b9c5-e221-4842-8159-342b2359622f" containerName="oc" Mar 12 15:27:01 crc kubenswrapper[4832]: E0312 
15:27:01.639107 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a043f4e6-f64c-4a26-ac04-93005bcc77d0" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.639114 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a043f4e6-f64c-4a26-ac04-93005bcc77d0" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.639357 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1433b9c5-e221-4842-8159-342b2359622f" containerName="oc" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.639372 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b7ef5a2-c6a4-4900-89ab-8c8f64c2924a" containerName="registry-server" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.639383 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a043f4e6-f64c-4a26-ac04-93005bcc77d0" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.640016 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.642465 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.643209 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6npm" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.643213 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.643275 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.643711 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.643968 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.643981 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.649067 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.649207 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: 
\"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.649269 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.649316 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.649364 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.649547 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" 
(UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.649702 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.649779 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.649839 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlq9n\" (UniqueName: \"kubernetes.io/projected/46a3ec68-bc9e-4758-ab38-6d6b776ad178-kube-api-access-mlq9n\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.649975 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.650111 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.663621 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n"] Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.751293 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.751446 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.751494 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.751961 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.752003 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.752859 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.752982 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.753018 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-inventory\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.753051 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlq9n\" (UniqueName: \"kubernetes.io/projected/46a3ec68-bc9e-4758-ab38-6d6b776ad178-kube-api-access-mlq9n\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.753083 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.753129 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.753998 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.756781 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.756987 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.759634 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.759623 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.760241 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-2\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.761740 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.761753 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.768198 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.768766 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.771463 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mlq9n\" (UniqueName: \"kubernetes.io/projected/46a3ec68-bc9e-4758-ab38-6d6b776ad178-kube-api-access-mlq9n\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z664n\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:01 crc kubenswrapper[4832]: I0312 15:27:01.962918 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:27:02 crc kubenswrapper[4832]: I0312 15:27:02.567420 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n"] Mar 12 15:27:02 crc kubenswrapper[4832]: I0312 15:27:02.577275 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 15:27:03 crc kubenswrapper[4832]: I0312 15:27:03.549359 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" event={"ID":"46a3ec68-bc9e-4758-ab38-6d6b776ad178","Type":"ContainerStarted","Data":"ac717fd70f553d7ced1ed2940bcfc39a249042e7167b5c5f40f4a3d430dfb6ce"} Mar 12 15:27:03 crc kubenswrapper[4832]: I0312 15:27:03.549640 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" event={"ID":"46a3ec68-bc9e-4758-ab38-6d6b776ad178","Type":"ContainerStarted","Data":"da224c5b21d4941d5dd8de73921e0271f3fbbfc9ef9afe7e6cd6b5645a8280fa"} Mar 12 15:27:03 crc kubenswrapper[4832]: I0312 15:27:03.582460 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" podStartSLOduration=2.109080006 podStartE2EDuration="2.582438581s" podCreationTimestamp="2026-03-12 15:27:01 +0000 UTC" firstStartedPulling="2026-03-12 15:27:02.576840152 +0000 UTC m=+2381.220854418" lastFinishedPulling="2026-03-12 15:27:03.050198757 +0000 UTC 
m=+2381.694212993" observedRunningTime="2026-03-12 15:27:03.569689798 +0000 UTC m=+2382.213704034" watchObservedRunningTime="2026-03-12 15:27:03.582438581 +0000 UTC m=+2382.226452817" Mar 12 15:27:15 crc kubenswrapper[4832]: I0312 15:27:15.620291 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:27:15 crc kubenswrapper[4832]: E0312 15:27:15.621328 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:27:28 crc kubenswrapper[4832]: I0312 15:27:28.619986 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:27:28 crc kubenswrapper[4832]: E0312 15:27:28.620780 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:27:39 crc kubenswrapper[4832]: I0312 15:27:39.619813 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:27:39 crc kubenswrapper[4832]: E0312 15:27:39.620528 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:27:53 crc kubenswrapper[4832]: I0312 15:27:53.620745 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:27:53 crc kubenswrapper[4832]: E0312 15:27:53.621620 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:28:00 crc kubenswrapper[4832]: I0312 15:28:00.161684 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555488-2pzg4"] Mar 12 15:28:00 crc kubenswrapper[4832]: I0312 15:28:00.163562 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555488-2pzg4" Mar 12 15:28:00 crc kubenswrapper[4832]: I0312 15:28:00.166058 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:28:00 crc kubenswrapper[4832]: I0312 15:28:00.166916 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:28:00 crc kubenswrapper[4832]: I0312 15:28:00.167467 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:28:00 crc kubenswrapper[4832]: I0312 15:28:00.188848 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555488-2pzg4"] Mar 12 15:28:00 crc kubenswrapper[4832]: I0312 15:28:00.191072 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64s5j\" (UniqueName: \"kubernetes.io/projected/c29c2304-219e-4871-8dd0-232c9eaa6500-kube-api-access-64s5j\") pod \"auto-csr-approver-29555488-2pzg4\" (UID: \"c29c2304-219e-4871-8dd0-232c9eaa6500\") " pod="openshift-infra/auto-csr-approver-29555488-2pzg4" Mar 12 15:28:00 crc kubenswrapper[4832]: I0312 15:28:00.293456 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64s5j\" (UniqueName: \"kubernetes.io/projected/c29c2304-219e-4871-8dd0-232c9eaa6500-kube-api-access-64s5j\") pod \"auto-csr-approver-29555488-2pzg4\" (UID: \"c29c2304-219e-4871-8dd0-232c9eaa6500\") " pod="openshift-infra/auto-csr-approver-29555488-2pzg4" Mar 12 15:28:00 crc kubenswrapper[4832]: I0312 15:28:00.341394 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64s5j\" (UniqueName: \"kubernetes.io/projected/c29c2304-219e-4871-8dd0-232c9eaa6500-kube-api-access-64s5j\") pod \"auto-csr-approver-29555488-2pzg4\" (UID: \"c29c2304-219e-4871-8dd0-232c9eaa6500\") " 
pod="openshift-infra/auto-csr-approver-29555488-2pzg4" Mar 12 15:28:00 crc kubenswrapper[4832]: I0312 15:28:00.501207 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555488-2pzg4" Mar 12 15:28:01 crc kubenswrapper[4832]: I0312 15:28:01.000766 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555488-2pzg4"] Mar 12 15:28:01 crc kubenswrapper[4832]: I0312 15:28:01.550119 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555488-2pzg4" event={"ID":"c29c2304-219e-4871-8dd0-232c9eaa6500","Type":"ContainerStarted","Data":"9e975d712958d094ec50783875f4a514ee17b209f36793a3eca55e5bf7ba35f5"} Mar 12 15:28:02 crc kubenswrapper[4832]: I0312 15:28:02.563085 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555488-2pzg4" event={"ID":"c29c2304-219e-4871-8dd0-232c9eaa6500","Type":"ContainerStarted","Data":"896580ed7a64ef1366a062f8012304755c7aba364cc126d03e8f5bbd3015bd45"} Mar 12 15:28:02 crc kubenswrapper[4832]: I0312 15:28:02.579394 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555488-2pzg4" podStartSLOduration=1.505801865 podStartE2EDuration="2.579367322s" podCreationTimestamp="2026-03-12 15:28:00 +0000 UTC" firstStartedPulling="2026-03-12 15:28:01.008670633 +0000 UTC m=+2439.652684879" lastFinishedPulling="2026-03-12 15:28:02.0822361 +0000 UTC m=+2440.726250336" observedRunningTime="2026-03-12 15:28:02.579215667 +0000 UTC m=+2441.223229913" watchObservedRunningTime="2026-03-12 15:28:02.579367322 +0000 UTC m=+2441.223381548" Mar 12 15:28:03 crc kubenswrapper[4832]: I0312 15:28:03.574009 4832 generic.go:334] "Generic (PLEG): container finished" podID="c29c2304-219e-4871-8dd0-232c9eaa6500" containerID="896580ed7a64ef1366a062f8012304755c7aba364cc126d03e8f5bbd3015bd45" exitCode=0 Mar 12 15:28:03 crc 
kubenswrapper[4832]: I0312 15:28:03.574060 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555488-2pzg4" event={"ID":"c29c2304-219e-4871-8dd0-232c9eaa6500","Type":"ContainerDied","Data":"896580ed7a64ef1366a062f8012304755c7aba364cc126d03e8f5bbd3015bd45"} Mar 12 15:28:05 crc kubenswrapper[4832]: I0312 15:28:05.044717 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555488-2pzg4" Mar 12 15:28:05 crc kubenswrapper[4832]: I0312 15:28:05.099125 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64s5j\" (UniqueName: \"kubernetes.io/projected/c29c2304-219e-4871-8dd0-232c9eaa6500-kube-api-access-64s5j\") pod \"c29c2304-219e-4871-8dd0-232c9eaa6500\" (UID: \"c29c2304-219e-4871-8dd0-232c9eaa6500\") " Mar 12 15:28:05 crc kubenswrapper[4832]: I0312 15:28:05.106720 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29c2304-219e-4871-8dd0-232c9eaa6500-kube-api-access-64s5j" (OuterVolumeSpecName: "kube-api-access-64s5j") pod "c29c2304-219e-4871-8dd0-232c9eaa6500" (UID: "c29c2304-219e-4871-8dd0-232c9eaa6500"). InnerVolumeSpecName "kube-api-access-64s5j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:28:05 crc kubenswrapper[4832]: I0312 15:28:05.201749 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64s5j\" (UniqueName: \"kubernetes.io/projected/c29c2304-219e-4871-8dd0-232c9eaa6500-kube-api-access-64s5j\") on node \"crc\" DevicePath \"\"" Mar 12 15:28:05 crc kubenswrapper[4832]: I0312 15:28:05.597006 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555488-2pzg4" event={"ID":"c29c2304-219e-4871-8dd0-232c9eaa6500","Type":"ContainerDied","Data":"9e975d712958d094ec50783875f4a514ee17b209f36793a3eca55e5bf7ba35f5"} Mar 12 15:28:05 crc kubenswrapper[4832]: I0312 15:28:05.597089 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e975d712958d094ec50783875f4a514ee17b209f36793a3eca55e5bf7ba35f5" Mar 12 15:28:05 crc kubenswrapper[4832]: I0312 15:28:05.597096 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555488-2pzg4" Mar 12 15:28:05 crc kubenswrapper[4832]: I0312 15:28:05.653024 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555482-d68mx"] Mar 12 15:28:05 crc kubenswrapper[4832]: I0312 15:28:05.661027 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555482-d68mx"] Mar 12 15:28:06 crc kubenswrapper[4832]: I0312 15:28:06.629428 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9" path="/var/lib/kubelet/pods/bc14c7b9-5c50-4f8f-b601-2721bdb7d9b9/volumes" Mar 12 15:28:08 crc kubenswrapper[4832]: I0312 15:28:08.620126 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:28:08 crc kubenswrapper[4832]: E0312 15:28:08.620710 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:28:19 crc kubenswrapper[4832]: I0312 15:28:19.620868 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:28:19 crc kubenswrapper[4832]: E0312 15:28:19.622098 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:28:32 crc kubenswrapper[4832]: I0312 15:28:32.628070 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:28:32 crc kubenswrapper[4832]: E0312 15:28:32.628970 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:28:45 crc kubenswrapper[4832]: I0312 15:28:45.620452 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:28:45 crc kubenswrapper[4832]: E0312 15:28:45.621238 4832 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:28:51 crc kubenswrapper[4832]: I0312 15:28:51.587201 4832 scope.go:117] "RemoveContainer" containerID="b92d28140951945ac31129c0d1508ede70c0818a0fec2e487563119129b0a1ad" Mar 12 15:28:59 crc kubenswrapper[4832]: I0312 15:28:59.620817 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:28:59 crc kubenswrapper[4832]: E0312 15:28:59.621682 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:29:14 crc kubenswrapper[4832]: I0312 15:29:14.621376 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:29:14 crc kubenswrapper[4832]: E0312 15:29:14.623697 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:29:26 crc kubenswrapper[4832]: I0312 15:29:26.620204 4832 scope.go:117] 
"RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:29:26 crc kubenswrapper[4832]: E0312 15:29:26.621059 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:29:32 crc kubenswrapper[4832]: I0312 15:29:32.523328 4832 generic.go:334] "Generic (PLEG): container finished" podID="46a3ec68-bc9e-4758-ab38-6d6b776ad178" containerID="ac717fd70f553d7ced1ed2940bcfc39a249042e7167b5c5f40f4a3d430dfb6ce" exitCode=0 Mar 12 15:29:32 crc kubenswrapper[4832]: I0312 15:29:32.523480 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" event={"ID":"46a3ec68-bc9e-4758-ab38-6d6b776ad178","Type":"ContainerDied","Data":"ac717fd70f553d7ced1ed2940bcfc39a249042e7167b5c5f40f4a3d430dfb6ce"} Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.040673 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.068089 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-3\") pod \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.068170 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-inventory\") pod \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.068204 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-combined-ca-bundle\") pod \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.068254 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-1\") pod \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.068278 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-ssh-key-openstack-edpm-ipam\") pod \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.075854 4832 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "46a3ec68-bc9e-4758-ab38-6d6b776ad178" (UID: "46a3ec68-bc9e-4758-ab38-6d6b776ad178"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.097680 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "46a3ec68-bc9e-4758-ab38-6d6b776ad178" (UID: "46a3ec68-bc9e-4758-ab38-6d6b776ad178"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.098136 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-inventory" (OuterVolumeSpecName: "inventory") pod "46a3ec68-bc9e-4758-ab38-6d6b776ad178" (UID: "46a3ec68-bc9e-4758-ab38-6d6b776ad178"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.100406 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "46a3ec68-bc9e-4758-ab38-6d6b776ad178" (UID: "46a3ec68-bc9e-4758-ab38-6d6b776ad178"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.106437 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "46a3ec68-bc9e-4758-ab38-6d6b776ad178" (UID: "46a3ec68-bc9e-4758-ab38-6d6b776ad178"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.171338 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-2\") pod \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.171421 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-migration-ssh-key-1\") pod \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.171463 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlq9n\" (UniqueName: \"kubernetes.io/projected/46a3ec68-bc9e-4758-ab38-6d6b776ad178-kube-api-access-mlq9n\") pod \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.171539 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-migration-ssh-key-0\") pod \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\" (UID: 
\"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.171609 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-0\") pod \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.171653 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-extra-config-0\") pod \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\" (UID: \"46a3ec68-bc9e-4758-ab38-6d6b776ad178\") " Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.172433 4832 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.172469 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.172491 4832 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.172526 4832 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.172543 4832 reconciler_common.go:293] 
"Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.174426 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46a3ec68-bc9e-4758-ab38-6d6b776ad178-kube-api-access-mlq9n" (OuterVolumeSpecName: "kube-api-access-mlq9n") pod "46a3ec68-bc9e-4758-ab38-6d6b776ad178" (UID: "46a3ec68-bc9e-4758-ab38-6d6b776ad178"). InnerVolumeSpecName "kube-api-access-mlq9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.196727 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "46a3ec68-bc9e-4758-ab38-6d6b776ad178" (UID: "46a3ec68-bc9e-4758-ab38-6d6b776ad178"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.203620 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "46a3ec68-bc9e-4758-ab38-6d6b776ad178" (UID: "46a3ec68-bc9e-4758-ab38-6d6b776ad178"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.203819 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "46a3ec68-bc9e-4758-ab38-6d6b776ad178" (UID: "46a3ec68-bc9e-4758-ab38-6d6b776ad178"). 
InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.211530 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "46a3ec68-bc9e-4758-ab38-6d6b776ad178" (UID: "46a3ec68-bc9e-4758-ab38-6d6b776ad178"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.219772 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "46a3ec68-bc9e-4758-ab38-6d6b776ad178" (UID: "46a3ec68-bc9e-4758-ab38-6d6b776ad178"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.274311 4832 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.274356 4832 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.274370 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlq9n\" (UniqueName: \"kubernetes.io/projected/46a3ec68-bc9e-4758-ab38-6d6b776ad178-kube-api-access-mlq9n\") on node \"crc\" DevicePath \"\"" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.274383 4832 reconciler_common.go:293] "Volume 
detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.274396 4832 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.274410 4832 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/46a3ec68-bc9e-4758-ab38-6d6b776ad178-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.545217 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" event={"ID":"46a3ec68-bc9e-4758-ab38-6d6b776ad178","Type":"ContainerDied","Data":"da224c5b21d4941d5dd8de73921e0271f3fbbfc9ef9afe7e6cd6b5645a8280fa"} Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.545275 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da224c5b21d4941d5dd8de73921e0271f3fbbfc9ef9afe7e6cd6b5645a8280fa" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.545297 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z664n" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.675686 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d"] Mar 12 15:29:34 crc kubenswrapper[4832]: E0312 15:29:34.676296 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a3ec68-bc9e-4758-ab38-6d6b776ad178" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.676331 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a3ec68-bc9e-4758-ab38-6d6b776ad178" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 12 15:29:34 crc kubenswrapper[4832]: E0312 15:29:34.676344 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29c2304-219e-4871-8dd0-232c9eaa6500" containerName="oc" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.676350 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29c2304-219e-4871-8dd0-232c9eaa6500" containerName="oc" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.676732 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29c2304-219e-4871-8dd0-232c9eaa6500" containerName="oc" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.676759 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="46a3ec68-bc9e-4758-ab38-6d6b776ad178" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.677388 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.679895 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.680737 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.680878 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.683250 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.684455 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.684719 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.684821 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.684866 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.685080 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx5dg\" (UniqueName: \"kubernetes.io/projected/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-kube-api-access-fx5dg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.685233 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.685287 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.687120 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6npm" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.689674 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d"] Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.787233 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx5dg\" (UniqueName: \"kubernetes.io/projected/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-kube-api-access-fx5dg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.787397 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.787444 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.787562 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.787648 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.787704 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.787741 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.791977 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.793755 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.794074 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.794538 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.793681 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-inventory\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.794867 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:34 crc kubenswrapper[4832]: I0312 15:29:34.805695 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx5dg\" (UniqueName: \"kubernetes.io/projected/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-kube-api-access-fx5dg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ft68d\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:35 crc kubenswrapper[4832]: I0312 15:29:35.003122 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:29:35 crc kubenswrapper[4832]: I0312 15:29:35.578003 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d"] Mar 12 15:29:36 crc kubenswrapper[4832]: I0312 15:29:36.560566 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" event={"ID":"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e","Type":"ContainerStarted","Data":"229b60ec84f665c9371a0f6f3f1205bd57deb71b44a55c46c95e75df3dac45e7"} Mar 12 15:29:36 crc kubenswrapper[4832]: I0312 15:29:36.560952 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" event={"ID":"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e","Type":"ContainerStarted","Data":"80c07fc4580ea42c6daa66f74ce9964ed273d388d045fb5bc71dbbdac1328eeb"} Mar 12 15:29:36 crc kubenswrapper[4832]: I0312 15:29:36.581352 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" podStartSLOduration=2.067035469 podStartE2EDuration="2.58132634s" podCreationTimestamp="2026-03-12 15:29:34 +0000 UTC" firstStartedPulling="2026-03-12 15:29:35.581392425 +0000 UTC m=+2534.225406651" lastFinishedPulling="2026-03-12 15:29:36.095683286 +0000 UTC m=+2534.739697522" observedRunningTime="2026-03-12 15:29:36.581252298 +0000 UTC m=+2535.225266534" watchObservedRunningTime="2026-03-12 15:29:36.58132634 +0000 UTC m=+2535.225340576" Mar 12 15:29:40 crc kubenswrapper[4832]: I0312 15:29:40.620556 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:29:40 crc kubenswrapper[4832]: E0312 15:29:40.621315 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:29:53 crc kubenswrapper[4832]: I0312 15:29:53.619556 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:29:53 crc kubenswrapper[4832]: E0312 15:29:53.620420 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.163341 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555490-kqvl8"] Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.165440 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555490-kqvl8" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.167955 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.169844 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.170345 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.173790 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555490-hw7ls"] Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.175259 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555490-hw7ls" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.179151 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.179668 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.212296 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555490-kqvl8"] Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.226744 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555490-hw7ls"] Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.324687 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b11e42a4-3313-4196-951a-819b016cd002-secret-volume\") pod \"collect-profiles-29555490-hw7ls\" (UID: \"b11e42a4-3313-4196-951a-819b016cd002\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555490-hw7ls" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.324766 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7vpq\" (UniqueName: \"kubernetes.io/projected/b11e42a4-3313-4196-951a-819b016cd002-kube-api-access-j7vpq\") pod \"collect-profiles-29555490-hw7ls\" (UID: \"b11e42a4-3313-4196-951a-819b016cd002\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555490-hw7ls" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.324873 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b11e42a4-3313-4196-951a-819b016cd002-config-volume\") pod \"collect-profiles-29555490-hw7ls\" (UID: \"b11e42a4-3313-4196-951a-819b016cd002\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555490-hw7ls" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.325073 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ql9r\" (UniqueName: \"kubernetes.io/projected/5d11ff7f-c51b-4f85-ae36-de90b4fef0e8-kube-api-access-5ql9r\") pod \"auto-csr-approver-29555490-kqvl8\" (UID: \"5d11ff7f-c51b-4f85-ae36-de90b4fef0e8\") " pod="openshift-infra/auto-csr-approver-29555490-kqvl8" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.426470 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b11e42a4-3313-4196-951a-819b016cd002-secret-volume\") pod \"collect-profiles-29555490-hw7ls\" (UID: \"b11e42a4-3313-4196-951a-819b016cd002\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555490-hw7ls" Mar 12 
15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.426561 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7vpq\" (UniqueName: \"kubernetes.io/projected/b11e42a4-3313-4196-951a-819b016cd002-kube-api-access-j7vpq\") pod \"collect-profiles-29555490-hw7ls\" (UID: \"b11e42a4-3313-4196-951a-819b016cd002\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555490-hw7ls" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.426629 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b11e42a4-3313-4196-951a-819b016cd002-config-volume\") pod \"collect-profiles-29555490-hw7ls\" (UID: \"b11e42a4-3313-4196-951a-819b016cd002\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555490-hw7ls" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.426796 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ql9r\" (UniqueName: \"kubernetes.io/projected/5d11ff7f-c51b-4f85-ae36-de90b4fef0e8-kube-api-access-5ql9r\") pod \"auto-csr-approver-29555490-kqvl8\" (UID: \"5d11ff7f-c51b-4f85-ae36-de90b4fef0e8\") " pod="openshift-infra/auto-csr-approver-29555490-kqvl8" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.428162 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b11e42a4-3313-4196-951a-819b016cd002-config-volume\") pod \"collect-profiles-29555490-hw7ls\" (UID: \"b11e42a4-3313-4196-951a-819b016cd002\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555490-hw7ls" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.434442 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b11e42a4-3313-4196-951a-819b016cd002-secret-volume\") pod \"collect-profiles-29555490-hw7ls\" (UID: 
\"b11e42a4-3313-4196-951a-819b016cd002\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555490-hw7ls" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.453172 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ql9r\" (UniqueName: \"kubernetes.io/projected/5d11ff7f-c51b-4f85-ae36-de90b4fef0e8-kube-api-access-5ql9r\") pod \"auto-csr-approver-29555490-kqvl8\" (UID: \"5d11ff7f-c51b-4f85-ae36-de90b4fef0e8\") " pod="openshift-infra/auto-csr-approver-29555490-kqvl8" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.455782 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7vpq\" (UniqueName: \"kubernetes.io/projected/b11e42a4-3313-4196-951a-819b016cd002-kube-api-access-j7vpq\") pod \"collect-profiles-29555490-hw7ls\" (UID: \"b11e42a4-3313-4196-951a-819b016cd002\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555490-hw7ls" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.523497 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555490-kqvl8" Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.527378 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555490-hw7ls" Mar 12 15:30:00 crc kubenswrapper[4832]: W0312 15:30:00.990641 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d11ff7f_c51b_4f85_ae36_de90b4fef0e8.slice/crio-72528b396f17e821b8870e5ffcf0f800a0a1f89114d6e2ee1b0d7c77c8511a72 WatchSource:0}: Error finding container 72528b396f17e821b8870e5ffcf0f800a0a1f89114d6e2ee1b0d7c77c8511a72: Status 404 returned error can't find the container with id 72528b396f17e821b8870e5ffcf0f800a0a1f89114d6e2ee1b0d7c77c8511a72 Mar 12 15:30:00 crc kubenswrapper[4832]: I0312 15:30:00.990766 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555490-kqvl8"] Mar 12 15:30:01 crc kubenswrapper[4832]: W0312 15:30:01.045813 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb11e42a4_3313_4196_951a_819b016cd002.slice/crio-e5398693698e2668b710142034526c317128cb1e7858e17eda01b9b4f48c7f24 WatchSource:0}: Error finding container e5398693698e2668b710142034526c317128cb1e7858e17eda01b9b4f48c7f24: Status 404 returned error can't find the container with id e5398693698e2668b710142034526c317128cb1e7858e17eda01b9b4f48c7f24 Mar 12 15:30:01 crc kubenswrapper[4832]: I0312 15:30:01.048394 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555490-hw7ls"] Mar 12 15:30:01 crc kubenswrapper[4832]: I0312 15:30:01.105056 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vn9tx"] Mar 12 15:30:01 crc kubenswrapper[4832]: I0312 15:30:01.117379 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vn9tx"] Mar 12 15:30:01 crc kubenswrapper[4832]: I0312 15:30:01.117469 4832 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-vn9tx" Mar 12 15:30:01 crc kubenswrapper[4832]: I0312 15:30:01.265549 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6dcfa6-e957-44ad-95b8-c389f0907efc-catalog-content\") pod \"community-operators-vn9tx\" (UID: \"ca6dcfa6-e957-44ad-95b8-c389f0907efc\") " pod="openshift-marketplace/community-operators-vn9tx" Mar 12 15:30:01 crc kubenswrapper[4832]: I0312 15:30:01.265585 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9795c\" (UniqueName: \"kubernetes.io/projected/ca6dcfa6-e957-44ad-95b8-c389f0907efc-kube-api-access-9795c\") pod \"community-operators-vn9tx\" (UID: \"ca6dcfa6-e957-44ad-95b8-c389f0907efc\") " pod="openshift-marketplace/community-operators-vn9tx" Mar 12 15:30:01 crc kubenswrapper[4832]: I0312 15:30:01.265638 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6dcfa6-e957-44ad-95b8-c389f0907efc-utilities\") pod \"community-operators-vn9tx\" (UID: \"ca6dcfa6-e957-44ad-95b8-c389f0907efc\") " pod="openshift-marketplace/community-operators-vn9tx" Mar 12 15:30:01 crc kubenswrapper[4832]: I0312 15:30:01.366983 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6dcfa6-e957-44ad-95b8-c389f0907efc-catalog-content\") pod \"community-operators-vn9tx\" (UID: \"ca6dcfa6-e957-44ad-95b8-c389f0907efc\") " pod="openshift-marketplace/community-operators-vn9tx" Mar 12 15:30:01 crc kubenswrapper[4832]: I0312 15:30:01.367339 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9795c\" (UniqueName: 
\"kubernetes.io/projected/ca6dcfa6-e957-44ad-95b8-c389f0907efc-kube-api-access-9795c\") pod \"community-operators-vn9tx\" (UID: \"ca6dcfa6-e957-44ad-95b8-c389f0907efc\") " pod="openshift-marketplace/community-operators-vn9tx" Mar 12 15:30:01 crc kubenswrapper[4832]: I0312 15:30:01.367413 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6dcfa6-e957-44ad-95b8-c389f0907efc-utilities\") pod \"community-operators-vn9tx\" (UID: \"ca6dcfa6-e957-44ad-95b8-c389f0907efc\") " pod="openshift-marketplace/community-operators-vn9tx" Mar 12 15:30:01 crc kubenswrapper[4832]: I0312 15:30:01.367890 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6dcfa6-e957-44ad-95b8-c389f0907efc-catalog-content\") pod \"community-operators-vn9tx\" (UID: \"ca6dcfa6-e957-44ad-95b8-c389f0907efc\") " pod="openshift-marketplace/community-operators-vn9tx" Mar 12 15:30:01 crc kubenswrapper[4832]: I0312 15:30:01.368055 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6dcfa6-e957-44ad-95b8-c389f0907efc-utilities\") pod \"community-operators-vn9tx\" (UID: \"ca6dcfa6-e957-44ad-95b8-c389f0907efc\") " pod="openshift-marketplace/community-operators-vn9tx" Mar 12 15:30:01 crc kubenswrapper[4832]: I0312 15:30:01.388834 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9795c\" (UniqueName: \"kubernetes.io/projected/ca6dcfa6-e957-44ad-95b8-c389f0907efc-kube-api-access-9795c\") pod \"community-operators-vn9tx\" (UID: \"ca6dcfa6-e957-44ad-95b8-c389f0907efc\") " pod="openshift-marketplace/community-operators-vn9tx" Mar 12 15:30:01 crc kubenswrapper[4832]: I0312 15:30:01.457796 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vn9tx" Mar 12 15:30:01 crc kubenswrapper[4832]: I0312 15:30:01.847570 4832 generic.go:334] "Generic (PLEG): container finished" podID="b11e42a4-3313-4196-951a-819b016cd002" containerID="192e1b4d4bed5b967eb2c3fc28e2c3192269caf4685e1841b3a558a57d37a94a" exitCode=0 Mar 12 15:30:01 crc kubenswrapper[4832]: I0312 15:30:01.847622 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555490-hw7ls" event={"ID":"b11e42a4-3313-4196-951a-819b016cd002","Type":"ContainerDied","Data":"192e1b4d4bed5b967eb2c3fc28e2c3192269caf4685e1841b3a558a57d37a94a"} Mar 12 15:30:01 crc kubenswrapper[4832]: I0312 15:30:01.847672 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555490-hw7ls" event={"ID":"b11e42a4-3313-4196-951a-819b016cd002","Type":"ContainerStarted","Data":"e5398693698e2668b710142034526c317128cb1e7858e17eda01b9b4f48c7f24"} Mar 12 15:30:01 crc kubenswrapper[4832]: I0312 15:30:01.849076 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555490-kqvl8" event={"ID":"5d11ff7f-c51b-4f85-ae36-de90b4fef0e8","Type":"ContainerStarted","Data":"72528b396f17e821b8870e5ffcf0f800a0a1f89114d6e2ee1b0d7c77c8511a72"} Mar 12 15:30:01 crc kubenswrapper[4832]: I0312 15:30:01.985118 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vn9tx"] Mar 12 15:30:02 crc kubenswrapper[4832]: W0312 15:30:02.003190 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca6dcfa6_e957_44ad_95b8_c389f0907efc.slice/crio-5bbf590cc702a450fde71d9c7693b4bef6e08c4b49901726b7da82058f0ce870 WatchSource:0}: Error finding container 5bbf590cc702a450fde71d9c7693b4bef6e08c4b49901726b7da82058f0ce870: Status 404 returned error can't find the container with id 
5bbf590cc702a450fde71d9c7693b4bef6e08c4b49901726b7da82058f0ce870 Mar 12 15:30:02 crc kubenswrapper[4832]: E0312 15:30:02.457040 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca6dcfa6_e957_44ad_95b8_c389f0907efc.slice/crio-conmon-dbe80b4604af9f6c08055bdad64aea80b35b681812dc271ba018d07241cf3c7e.scope\": RecentStats: unable to find data in memory cache]" Mar 12 15:30:02 crc kubenswrapper[4832]: I0312 15:30:02.864661 4832 generic.go:334] "Generic (PLEG): container finished" podID="ca6dcfa6-e957-44ad-95b8-c389f0907efc" containerID="dbe80b4604af9f6c08055bdad64aea80b35b681812dc271ba018d07241cf3c7e" exitCode=0 Mar 12 15:30:02 crc kubenswrapper[4832]: I0312 15:30:02.864731 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vn9tx" event={"ID":"ca6dcfa6-e957-44ad-95b8-c389f0907efc","Type":"ContainerDied","Data":"dbe80b4604af9f6c08055bdad64aea80b35b681812dc271ba018d07241cf3c7e"} Mar 12 15:30:02 crc kubenswrapper[4832]: I0312 15:30:02.865291 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vn9tx" event={"ID":"ca6dcfa6-e957-44ad-95b8-c389f0907efc","Type":"ContainerStarted","Data":"5bbf590cc702a450fde71d9c7693b4bef6e08c4b49901726b7da82058f0ce870"} Mar 12 15:30:03 crc kubenswrapper[4832]: I0312 15:30:03.284341 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555490-hw7ls" Mar 12 15:30:03 crc kubenswrapper[4832]: I0312 15:30:03.407425 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b11e42a4-3313-4196-951a-819b016cd002-secret-volume\") pod \"b11e42a4-3313-4196-951a-819b016cd002\" (UID: \"b11e42a4-3313-4196-951a-819b016cd002\") " Mar 12 15:30:03 crc kubenswrapper[4832]: I0312 15:30:03.407500 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7vpq\" (UniqueName: \"kubernetes.io/projected/b11e42a4-3313-4196-951a-819b016cd002-kube-api-access-j7vpq\") pod \"b11e42a4-3313-4196-951a-819b016cd002\" (UID: \"b11e42a4-3313-4196-951a-819b016cd002\") " Mar 12 15:30:03 crc kubenswrapper[4832]: I0312 15:30:03.407584 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b11e42a4-3313-4196-951a-819b016cd002-config-volume\") pod \"b11e42a4-3313-4196-951a-819b016cd002\" (UID: \"b11e42a4-3313-4196-951a-819b016cd002\") " Mar 12 15:30:03 crc kubenswrapper[4832]: I0312 15:30:03.408838 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b11e42a4-3313-4196-951a-819b016cd002-config-volume" (OuterVolumeSpecName: "config-volume") pod "b11e42a4-3313-4196-951a-819b016cd002" (UID: "b11e42a4-3313-4196-951a-819b016cd002"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:30:03 crc kubenswrapper[4832]: I0312 15:30:03.414408 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11e42a4-3313-4196-951a-819b016cd002-kube-api-access-j7vpq" (OuterVolumeSpecName: "kube-api-access-j7vpq") pod "b11e42a4-3313-4196-951a-819b016cd002" (UID: "b11e42a4-3313-4196-951a-819b016cd002"). 
InnerVolumeSpecName "kube-api-access-j7vpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:30:03 crc kubenswrapper[4832]: I0312 15:30:03.414732 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b11e42a4-3313-4196-951a-819b016cd002-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b11e42a4-3313-4196-951a-819b016cd002" (UID: "b11e42a4-3313-4196-951a-819b016cd002"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:30:03 crc kubenswrapper[4832]: I0312 15:30:03.510856 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b11e42a4-3313-4196-951a-819b016cd002-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 15:30:03 crc kubenswrapper[4832]: I0312 15:30:03.510909 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7vpq\" (UniqueName: \"kubernetes.io/projected/b11e42a4-3313-4196-951a-819b016cd002-kube-api-access-j7vpq\") on node \"crc\" DevicePath \"\"" Mar 12 15:30:03 crc kubenswrapper[4832]: I0312 15:30:03.510923 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b11e42a4-3313-4196-951a-819b016cd002-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 15:30:03 crc kubenswrapper[4832]: I0312 15:30:03.878604 4832 generic.go:334] "Generic (PLEG): container finished" podID="5d11ff7f-c51b-4f85-ae36-de90b4fef0e8" containerID="df0ca1d8216c15d08ae44c1abc5414e6e28c9bcbc347057d2e674a9c7e596a65" exitCode=0 Mar 12 15:30:03 crc kubenswrapper[4832]: I0312 15:30:03.878701 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555490-kqvl8" event={"ID":"5d11ff7f-c51b-4f85-ae36-de90b4fef0e8","Type":"ContainerDied","Data":"df0ca1d8216c15d08ae44c1abc5414e6e28c9bcbc347057d2e674a9c7e596a65"} Mar 12 15:30:03 crc kubenswrapper[4832]: I0312 15:30:03.880854 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555490-hw7ls" event={"ID":"b11e42a4-3313-4196-951a-819b016cd002","Type":"ContainerDied","Data":"e5398693698e2668b710142034526c317128cb1e7858e17eda01b9b4f48c7f24"} Mar 12 15:30:03 crc kubenswrapper[4832]: I0312 15:30:03.880925 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5398693698e2668b710142034526c317128cb1e7858e17eda01b9b4f48c7f24" Mar 12 15:30:03 crc kubenswrapper[4832]: I0312 15:30:03.880955 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555490-hw7ls" Mar 12 15:30:04 crc kubenswrapper[4832]: I0312 15:30:04.364626 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk"] Mar 12 15:30:04 crc kubenswrapper[4832]: I0312 15:30:04.372597 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555445-8xpmk"] Mar 12 15:30:04 crc kubenswrapper[4832]: I0312 15:30:04.633693 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bf32718-d22d-4e55-b158-43a02ef6a67f" path="/var/lib/kubelet/pods/0bf32718-d22d-4e55-b158-43a02ef6a67f/volumes" Mar 12 15:30:05 crc kubenswrapper[4832]: I0312 15:30:05.307322 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555490-kqvl8" Mar 12 15:30:05 crc kubenswrapper[4832]: I0312 15:30:05.454258 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ql9r\" (UniqueName: \"kubernetes.io/projected/5d11ff7f-c51b-4f85-ae36-de90b4fef0e8-kube-api-access-5ql9r\") pod \"5d11ff7f-c51b-4f85-ae36-de90b4fef0e8\" (UID: \"5d11ff7f-c51b-4f85-ae36-de90b4fef0e8\") " Mar 12 15:30:05 crc kubenswrapper[4832]: I0312 15:30:05.461741 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d11ff7f-c51b-4f85-ae36-de90b4fef0e8-kube-api-access-5ql9r" (OuterVolumeSpecName: "kube-api-access-5ql9r") pod "5d11ff7f-c51b-4f85-ae36-de90b4fef0e8" (UID: "5d11ff7f-c51b-4f85-ae36-de90b4fef0e8"). InnerVolumeSpecName "kube-api-access-5ql9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:30:05 crc kubenswrapper[4832]: I0312 15:30:05.556421 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ql9r\" (UniqueName: \"kubernetes.io/projected/5d11ff7f-c51b-4f85-ae36-de90b4fef0e8-kube-api-access-5ql9r\") on node \"crc\" DevicePath \"\"" Mar 12 15:30:05 crc kubenswrapper[4832]: I0312 15:30:05.904078 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555490-kqvl8" Mar 12 15:30:05 crc kubenswrapper[4832]: I0312 15:30:05.904070 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555490-kqvl8" event={"ID":"5d11ff7f-c51b-4f85-ae36-de90b4fef0e8","Type":"ContainerDied","Data":"72528b396f17e821b8870e5ffcf0f800a0a1f89114d6e2ee1b0d7c77c8511a72"} Mar 12 15:30:05 crc kubenswrapper[4832]: I0312 15:30:05.904210 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72528b396f17e821b8870e5ffcf0f800a0a1f89114d6e2ee1b0d7c77c8511a72" Mar 12 15:30:05 crc kubenswrapper[4832]: I0312 15:30:05.906226 4832 generic.go:334] "Generic (PLEG): container finished" podID="ca6dcfa6-e957-44ad-95b8-c389f0907efc" containerID="fbb196dd2e7e27bc76c717d3c9b4a0739d9c45b9b7f7d06288d3bb8a0e0d54e7" exitCode=0 Mar 12 15:30:05 crc kubenswrapper[4832]: I0312 15:30:05.906265 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vn9tx" event={"ID":"ca6dcfa6-e957-44ad-95b8-c389f0907efc","Type":"ContainerDied","Data":"fbb196dd2e7e27bc76c717d3c9b4a0739d9c45b9b7f7d06288d3bb8a0e0d54e7"} Mar 12 15:30:06 crc kubenswrapper[4832]: I0312 15:30:06.375738 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555484-tgpln"] Mar 12 15:30:06 crc kubenswrapper[4832]: I0312 15:30:06.385400 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555484-tgpln"] Mar 12 15:30:06 crc kubenswrapper[4832]: I0312 15:30:06.620239 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:30:06 crc kubenswrapper[4832]: E0312 15:30:06.620669 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:30:06 crc kubenswrapper[4832]: I0312 15:30:06.636543 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b81ca45b-eed0-4034-976f-582b26effa59" path="/var/lib/kubelet/pods/b81ca45b-eed0-4034-976f-582b26effa59/volumes" Mar 12 15:30:06 crc kubenswrapper[4832]: I0312 15:30:06.916488 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vn9tx" event={"ID":"ca6dcfa6-e957-44ad-95b8-c389f0907efc","Type":"ContainerStarted","Data":"81d8ccd2e7925863620cd22a268d3aeff9fe0b20abc97fa1442ef25395240cf9"} Mar 12 15:30:06 crc kubenswrapper[4832]: I0312 15:30:06.933128 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vn9tx" podStartSLOduration=2.511707338 podStartE2EDuration="5.933091241s" podCreationTimestamp="2026-03-12 15:30:01 +0000 UTC" firstStartedPulling="2026-03-12 15:30:02.867412017 +0000 UTC m=+2561.511426253" lastFinishedPulling="2026-03-12 15:30:06.28879593 +0000 UTC m=+2564.932810156" observedRunningTime="2026-03-12 15:30:06.931785813 +0000 UTC m=+2565.575800089" watchObservedRunningTime="2026-03-12 15:30:06.933091241 +0000 UTC m=+2565.577105467" Mar 12 15:30:11 crc kubenswrapper[4832]: I0312 15:30:11.458058 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vn9tx" Mar 12 15:30:11 crc kubenswrapper[4832]: I0312 15:30:11.458622 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vn9tx" Mar 12 15:30:11 crc kubenswrapper[4832]: I0312 15:30:11.524489 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vn9tx" 
Mar 12 15:30:12 crc kubenswrapper[4832]: I0312 15:30:12.021309 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vn9tx" Mar 12 15:30:12 crc kubenswrapper[4832]: I0312 15:30:12.077472 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vn9tx"] Mar 12 15:30:13 crc kubenswrapper[4832]: I0312 15:30:13.996434 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vn9tx" podUID="ca6dcfa6-e957-44ad-95b8-c389f0907efc" containerName="registry-server" containerID="cri-o://81d8ccd2e7925863620cd22a268d3aeff9fe0b20abc97fa1442ef25395240cf9" gracePeriod=2 Mar 12 15:30:14 crc kubenswrapper[4832]: I0312 15:30:14.475040 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vn9tx" Mar 12 15:30:14 crc kubenswrapper[4832]: I0312 15:30:14.545768 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9795c\" (UniqueName: \"kubernetes.io/projected/ca6dcfa6-e957-44ad-95b8-c389f0907efc-kube-api-access-9795c\") pod \"ca6dcfa6-e957-44ad-95b8-c389f0907efc\" (UID: \"ca6dcfa6-e957-44ad-95b8-c389f0907efc\") " Mar 12 15:30:14 crc kubenswrapper[4832]: I0312 15:30:14.545851 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6dcfa6-e957-44ad-95b8-c389f0907efc-utilities\") pod \"ca6dcfa6-e957-44ad-95b8-c389f0907efc\" (UID: \"ca6dcfa6-e957-44ad-95b8-c389f0907efc\") " Mar 12 15:30:14 crc kubenswrapper[4832]: I0312 15:30:14.546032 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6dcfa6-e957-44ad-95b8-c389f0907efc-catalog-content\") pod \"ca6dcfa6-e957-44ad-95b8-c389f0907efc\" (UID: \"ca6dcfa6-e957-44ad-95b8-c389f0907efc\") 
" Mar 12 15:30:14 crc kubenswrapper[4832]: I0312 15:30:14.547006 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca6dcfa6-e957-44ad-95b8-c389f0907efc-utilities" (OuterVolumeSpecName: "utilities") pod "ca6dcfa6-e957-44ad-95b8-c389f0907efc" (UID: "ca6dcfa6-e957-44ad-95b8-c389f0907efc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:30:14 crc kubenswrapper[4832]: I0312 15:30:14.548421 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6dcfa6-e957-44ad-95b8-c389f0907efc-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:30:14 crc kubenswrapper[4832]: I0312 15:30:14.559745 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca6dcfa6-e957-44ad-95b8-c389f0907efc-kube-api-access-9795c" (OuterVolumeSpecName: "kube-api-access-9795c") pod "ca6dcfa6-e957-44ad-95b8-c389f0907efc" (UID: "ca6dcfa6-e957-44ad-95b8-c389f0907efc"). InnerVolumeSpecName "kube-api-access-9795c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:30:14 crc kubenswrapper[4832]: I0312 15:30:14.609431 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca6dcfa6-e957-44ad-95b8-c389f0907efc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca6dcfa6-e957-44ad-95b8-c389f0907efc" (UID: "ca6dcfa6-e957-44ad-95b8-c389f0907efc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:30:14 crc kubenswrapper[4832]: I0312 15:30:14.649932 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9795c\" (UniqueName: \"kubernetes.io/projected/ca6dcfa6-e957-44ad-95b8-c389f0907efc-kube-api-access-9795c\") on node \"crc\" DevicePath \"\"" Mar 12 15:30:14 crc kubenswrapper[4832]: I0312 15:30:14.650000 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6dcfa6-e957-44ad-95b8-c389f0907efc-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:30:15 crc kubenswrapper[4832]: I0312 15:30:15.010386 4832 generic.go:334] "Generic (PLEG): container finished" podID="ca6dcfa6-e957-44ad-95b8-c389f0907efc" containerID="81d8ccd2e7925863620cd22a268d3aeff9fe0b20abc97fa1442ef25395240cf9" exitCode=0 Mar 12 15:30:15 crc kubenswrapper[4832]: I0312 15:30:15.010455 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vn9tx" event={"ID":"ca6dcfa6-e957-44ad-95b8-c389f0907efc","Type":"ContainerDied","Data":"81d8ccd2e7925863620cd22a268d3aeff9fe0b20abc97fa1442ef25395240cf9"} Mar 12 15:30:15 crc kubenswrapper[4832]: I0312 15:30:15.010476 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vn9tx" Mar 12 15:30:15 crc kubenswrapper[4832]: I0312 15:30:15.010761 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vn9tx" event={"ID":"ca6dcfa6-e957-44ad-95b8-c389f0907efc","Type":"ContainerDied","Data":"5bbf590cc702a450fde71d9c7693b4bef6e08c4b49901726b7da82058f0ce870"} Mar 12 15:30:15 crc kubenswrapper[4832]: I0312 15:30:15.010821 4832 scope.go:117] "RemoveContainer" containerID="81d8ccd2e7925863620cd22a268d3aeff9fe0b20abc97fa1442ef25395240cf9" Mar 12 15:30:15 crc kubenswrapper[4832]: I0312 15:30:15.036356 4832 scope.go:117] "RemoveContainer" containerID="fbb196dd2e7e27bc76c717d3c9b4a0739d9c45b9b7f7d06288d3bb8a0e0d54e7" Mar 12 15:30:15 crc kubenswrapper[4832]: I0312 15:30:15.058396 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vn9tx"] Mar 12 15:30:15 crc kubenswrapper[4832]: I0312 15:30:15.070350 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vn9tx"] Mar 12 15:30:15 crc kubenswrapper[4832]: I0312 15:30:15.076212 4832 scope.go:117] "RemoveContainer" containerID="dbe80b4604af9f6c08055bdad64aea80b35b681812dc271ba018d07241cf3c7e" Mar 12 15:30:15 crc kubenswrapper[4832]: I0312 15:30:15.110317 4832 scope.go:117] "RemoveContainer" containerID="81d8ccd2e7925863620cd22a268d3aeff9fe0b20abc97fa1442ef25395240cf9" Mar 12 15:30:15 crc kubenswrapper[4832]: E0312 15:30:15.110775 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81d8ccd2e7925863620cd22a268d3aeff9fe0b20abc97fa1442ef25395240cf9\": container with ID starting with 81d8ccd2e7925863620cd22a268d3aeff9fe0b20abc97fa1442ef25395240cf9 not found: ID does not exist" containerID="81d8ccd2e7925863620cd22a268d3aeff9fe0b20abc97fa1442ef25395240cf9" Mar 12 15:30:15 crc kubenswrapper[4832]: I0312 15:30:15.110822 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81d8ccd2e7925863620cd22a268d3aeff9fe0b20abc97fa1442ef25395240cf9"} err="failed to get container status \"81d8ccd2e7925863620cd22a268d3aeff9fe0b20abc97fa1442ef25395240cf9\": rpc error: code = NotFound desc = could not find container \"81d8ccd2e7925863620cd22a268d3aeff9fe0b20abc97fa1442ef25395240cf9\": container with ID starting with 81d8ccd2e7925863620cd22a268d3aeff9fe0b20abc97fa1442ef25395240cf9 not found: ID does not exist" Mar 12 15:30:15 crc kubenswrapper[4832]: I0312 15:30:15.110850 4832 scope.go:117] "RemoveContainer" containerID="fbb196dd2e7e27bc76c717d3c9b4a0739d9c45b9b7f7d06288d3bb8a0e0d54e7" Mar 12 15:30:15 crc kubenswrapper[4832]: E0312 15:30:15.111404 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbb196dd2e7e27bc76c717d3c9b4a0739d9c45b9b7f7d06288d3bb8a0e0d54e7\": container with ID starting with fbb196dd2e7e27bc76c717d3c9b4a0739d9c45b9b7f7d06288d3bb8a0e0d54e7 not found: ID does not exist" containerID="fbb196dd2e7e27bc76c717d3c9b4a0739d9c45b9b7f7d06288d3bb8a0e0d54e7" Mar 12 15:30:15 crc kubenswrapper[4832]: I0312 15:30:15.111443 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbb196dd2e7e27bc76c717d3c9b4a0739d9c45b9b7f7d06288d3bb8a0e0d54e7"} err="failed to get container status \"fbb196dd2e7e27bc76c717d3c9b4a0739d9c45b9b7f7d06288d3bb8a0e0d54e7\": rpc error: code = NotFound desc = could not find container \"fbb196dd2e7e27bc76c717d3c9b4a0739d9c45b9b7f7d06288d3bb8a0e0d54e7\": container with ID starting with fbb196dd2e7e27bc76c717d3c9b4a0739d9c45b9b7f7d06288d3bb8a0e0d54e7 not found: ID does not exist" Mar 12 15:30:15 crc kubenswrapper[4832]: I0312 15:30:15.111471 4832 scope.go:117] "RemoveContainer" containerID="dbe80b4604af9f6c08055bdad64aea80b35b681812dc271ba018d07241cf3c7e" Mar 12 15:30:15 crc kubenswrapper[4832]: E0312 
15:30:15.112002 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbe80b4604af9f6c08055bdad64aea80b35b681812dc271ba018d07241cf3c7e\": container with ID starting with dbe80b4604af9f6c08055bdad64aea80b35b681812dc271ba018d07241cf3c7e not found: ID does not exist" containerID="dbe80b4604af9f6c08055bdad64aea80b35b681812dc271ba018d07241cf3c7e" Mar 12 15:30:15 crc kubenswrapper[4832]: I0312 15:30:15.112028 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbe80b4604af9f6c08055bdad64aea80b35b681812dc271ba018d07241cf3c7e"} err="failed to get container status \"dbe80b4604af9f6c08055bdad64aea80b35b681812dc271ba018d07241cf3c7e\": rpc error: code = NotFound desc = could not find container \"dbe80b4604af9f6c08055bdad64aea80b35b681812dc271ba018d07241cf3c7e\": container with ID starting with dbe80b4604af9f6c08055bdad64aea80b35b681812dc271ba018d07241cf3c7e not found: ID does not exist" Mar 12 15:30:16 crc kubenswrapper[4832]: I0312 15:30:16.662397 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca6dcfa6-e957-44ad-95b8-c389f0907efc" path="/var/lib/kubelet/pods/ca6dcfa6-e957-44ad-95b8-c389f0907efc/volumes" Mar 12 15:30:20 crc kubenswrapper[4832]: I0312 15:30:20.619536 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:30:20 crc kubenswrapper[4832]: E0312 15:30:20.620267 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:30:34 crc kubenswrapper[4832]: I0312 15:30:34.620666 
4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:30:35 crc kubenswrapper[4832]: I0312 15:30:35.218337 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerStarted","Data":"b8dff5b7b35c730d242efb74081df9ddc7097bb83cf1387ad12def4b71de8e8a"} Mar 12 15:30:51 crc kubenswrapper[4832]: I0312 15:30:51.697825 4832 scope.go:117] "RemoveContainer" containerID="419df817519c7b2b9f1e806963b4f568797fe722156998155b51647be6b23668" Mar 12 15:30:51 crc kubenswrapper[4832]: I0312 15:30:51.726903 4832 scope.go:117] "RemoveContainer" containerID="8fa3577d6b5e9fefeda6ea21ecce6c689ec589f6fa1e0f34f81ad78c5f685d0a" Mar 12 15:32:00 crc kubenswrapper[4832]: I0312 15:32:00.165808 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555492-tt6hh"] Mar 12 15:32:00 crc kubenswrapper[4832]: E0312 15:32:00.166769 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca6dcfa6-e957-44ad-95b8-c389f0907efc" containerName="extract-content" Mar 12 15:32:00 crc kubenswrapper[4832]: I0312 15:32:00.166783 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca6dcfa6-e957-44ad-95b8-c389f0907efc" containerName="extract-content" Mar 12 15:32:00 crc kubenswrapper[4832]: E0312 15:32:00.166800 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11e42a4-3313-4196-951a-819b016cd002" containerName="collect-profiles" Mar 12 15:32:00 crc kubenswrapper[4832]: I0312 15:32:00.166807 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11e42a4-3313-4196-951a-819b016cd002" containerName="collect-profiles" Mar 12 15:32:00 crc kubenswrapper[4832]: E0312 15:32:00.166823 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca6dcfa6-e957-44ad-95b8-c389f0907efc" containerName="registry-server" Mar 12 15:32:00 
crc kubenswrapper[4832]: I0312 15:32:00.166830 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca6dcfa6-e957-44ad-95b8-c389f0907efc" containerName="registry-server" Mar 12 15:32:00 crc kubenswrapper[4832]: E0312 15:32:00.166843 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca6dcfa6-e957-44ad-95b8-c389f0907efc" containerName="extract-utilities" Mar 12 15:32:00 crc kubenswrapper[4832]: I0312 15:32:00.166849 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca6dcfa6-e957-44ad-95b8-c389f0907efc" containerName="extract-utilities" Mar 12 15:32:00 crc kubenswrapper[4832]: E0312 15:32:00.166862 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d11ff7f-c51b-4f85-ae36-de90b4fef0e8" containerName="oc" Mar 12 15:32:00 crc kubenswrapper[4832]: I0312 15:32:00.166868 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d11ff7f-c51b-4f85-ae36-de90b4fef0e8" containerName="oc" Mar 12 15:32:00 crc kubenswrapper[4832]: I0312 15:32:00.167035 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d11ff7f-c51b-4f85-ae36-de90b4fef0e8" containerName="oc" Mar 12 15:32:00 crc kubenswrapper[4832]: I0312 15:32:00.167050 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca6dcfa6-e957-44ad-95b8-c389f0907efc" containerName="registry-server" Mar 12 15:32:00 crc kubenswrapper[4832]: I0312 15:32:00.167071 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b11e42a4-3313-4196-951a-819b016cd002" containerName="collect-profiles" Mar 12 15:32:00 crc kubenswrapper[4832]: I0312 15:32:00.167681 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555492-tt6hh" Mar 12 15:32:00 crc kubenswrapper[4832]: I0312 15:32:00.170620 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:32:00 crc kubenswrapper[4832]: I0312 15:32:00.170916 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:32:00 crc kubenswrapper[4832]: I0312 15:32:00.171325 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:32:00 crc kubenswrapper[4832]: I0312 15:32:00.180620 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555492-tt6hh"] Mar 12 15:32:00 crc kubenswrapper[4832]: I0312 15:32:00.226909 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2q75\" (UniqueName: \"kubernetes.io/projected/293116c3-d31a-499c-99ba-37e61728f952-kube-api-access-l2q75\") pod \"auto-csr-approver-29555492-tt6hh\" (UID: \"293116c3-d31a-499c-99ba-37e61728f952\") " pod="openshift-infra/auto-csr-approver-29555492-tt6hh" Mar 12 15:32:00 crc kubenswrapper[4832]: I0312 15:32:00.328974 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2q75\" (UniqueName: \"kubernetes.io/projected/293116c3-d31a-499c-99ba-37e61728f952-kube-api-access-l2q75\") pod \"auto-csr-approver-29555492-tt6hh\" (UID: \"293116c3-d31a-499c-99ba-37e61728f952\") " pod="openshift-infra/auto-csr-approver-29555492-tt6hh" Mar 12 15:32:00 crc kubenswrapper[4832]: I0312 15:32:00.369370 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2q75\" (UniqueName: \"kubernetes.io/projected/293116c3-d31a-499c-99ba-37e61728f952-kube-api-access-l2q75\") pod \"auto-csr-approver-29555492-tt6hh\" (UID: \"293116c3-d31a-499c-99ba-37e61728f952\") " 
pod="openshift-infra/auto-csr-approver-29555492-tt6hh" Mar 12 15:32:00 crc kubenswrapper[4832]: I0312 15:32:00.501589 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555492-tt6hh" Mar 12 15:32:00 crc kubenswrapper[4832]: I0312 15:32:00.874971 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555492-tt6hh"] Mar 12 15:32:01 crc kubenswrapper[4832]: I0312 15:32:01.086776 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555492-tt6hh" event={"ID":"293116c3-d31a-499c-99ba-37e61728f952","Type":"ContainerStarted","Data":"d157fa4969d26e2012bfa8c48281fc45297dbf7ee75cf03452d747f7b917211a"} Mar 12 15:32:03 crc kubenswrapper[4832]: I0312 15:32:03.104969 4832 generic.go:334] "Generic (PLEG): container finished" podID="293116c3-d31a-499c-99ba-37e61728f952" containerID="c9e6b9daf115a114143f64013427cbfcfc4c908c7d96215f2616ed9b50dbc8a1" exitCode=0 Mar 12 15:32:03 crc kubenswrapper[4832]: I0312 15:32:03.105099 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555492-tt6hh" event={"ID":"293116c3-d31a-499c-99ba-37e61728f952","Type":"ContainerDied","Data":"c9e6b9daf115a114143f64013427cbfcfc4c908c7d96215f2616ed9b50dbc8a1"} Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.030876 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gbgmv"] Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.033665 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gbgmv" Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.073458 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gbgmv"] Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.101335 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de69002f-c9a2-4ec9-b00f-e502a59e3e9e-utilities\") pod \"redhat-operators-gbgmv\" (UID: \"de69002f-c9a2-4ec9-b00f-e502a59e3e9e\") " pod="openshift-marketplace/redhat-operators-gbgmv" Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.101477 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndfcg\" (UniqueName: \"kubernetes.io/projected/de69002f-c9a2-4ec9-b00f-e502a59e3e9e-kube-api-access-ndfcg\") pod \"redhat-operators-gbgmv\" (UID: \"de69002f-c9a2-4ec9-b00f-e502a59e3e9e\") " pod="openshift-marketplace/redhat-operators-gbgmv" Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.101640 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de69002f-c9a2-4ec9-b00f-e502a59e3e9e-catalog-content\") pod \"redhat-operators-gbgmv\" (UID: \"de69002f-c9a2-4ec9-b00f-e502a59e3e9e\") " pod="openshift-marketplace/redhat-operators-gbgmv" Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.120118 4832 generic.go:334] "Generic (PLEG): container finished" podID="bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e" containerID="229b60ec84f665c9371a0f6f3f1205bd57deb71b44a55c46c95e75df3dac45e7" exitCode=0 Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.120961 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" 
event={"ID":"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e","Type":"ContainerDied","Data":"229b60ec84f665c9371a0f6f3f1205bd57deb71b44a55c46c95e75df3dac45e7"} Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.202732 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de69002f-c9a2-4ec9-b00f-e502a59e3e9e-utilities\") pod \"redhat-operators-gbgmv\" (UID: \"de69002f-c9a2-4ec9-b00f-e502a59e3e9e\") " pod="openshift-marketplace/redhat-operators-gbgmv" Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.203128 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndfcg\" (UniqueName: \"kubernetes.io/projected/de69002f-c9a2-4ec9-b00f-e502a59e3e9e-kube-api-access-ndfcg\") pod \"redhat-operators-gbgmv\" (UID: \"de69002f-c9a2-4ec9-b00f-e502a59e3e9e\") " pod="openshift-marketplace/redhat-operators-gbgmv" Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.203202 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de69002f-c9a2-4ec9-b00f-e502a59e3e9e-catalog-content\") pod \"redhat-operators-gbgmv\" (UID: \"de69002f-c9a2-4ec9-b00f-e502a59e3e9e\") " pod="openshift-marketplace/redhat-operators-gbgmv" Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.203274 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de69002f-c9a2-4ec9-b00f-e502a59e3e9e-utilities\") pod \"redhat-operators-gbgmv\" (UID: \"de69002f-c9a2-4ec9-b00f-e502a59e3e9e\") " pod="openshift-marketplace/redhat-operators-gbgmv" Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.203686 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de69002f-c9a2-4ec9-b00f-e502a59e3e9e-catalog-content\") pod \"redhat-operators-gbgmv\" (UID: 
\"de69002f-c9a2-4ec9-b00f-e502a59e3e9e\") " pod="openshift-marketplace/redhat-operators-gbgmv" Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.241178 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndfcg\" (UniqueName: \"kubernetes.io/projected/de69002f-c9a2-4ec9-b00f-e502a59e3e9e-kube-api-access-ndfcg\") pod \"redhat-operators-gbgmv\" (UID: \"de69002f-c9a2-4ec9-b00f-e502a59e3e9e\") " pod="openshift-marketplace/redhat-operators-gbgmv" Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.365586 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbgmv" Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.535742 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555492-tt6hh" Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.721587 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2q75\" (UniqueName: \"kubernetes.io/projected/293116c3-d31a-499c-99ba-37e61728f952-kube-api-access-l2q75\") pod \"293116c3-d31a-499c-99ba-37e61728f952\" (UID: \"293116c3-d31a-499c-99ba-37e61728f952\") " Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.728768 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/293116c3-d31a-499c-99ba-37e61728f952-kube-api-access-l2q75" (OuterVolumeSpecName: "kube-api-access-l2q75") pod "293116c3-d31a-499c-99ba-37e61728f952" (UID: "293116c3-d31a-499c-99ba-37e61728f952"). InnerVolumeSpecName "kube-api-access-l2q75". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.824328 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2q75\" (UniqueName: \"kubernetes.io/projected/293116c3-d31a-499c-99ba-37e61728f952-kube-api-access-l2q75\") on node \"crc\" DevicePath \"\"" Mar 12 15:32:04 crc kubenswrapper[4832]: W0312 15:32:04.892160 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde69002f_c9a2_4ec9_b00f_e502a59e3e9e.slice/crio-f07b1a54d8b22cfaa80cfb7ff1643842eaef6efdf78fc02dfaa1132e8658f648 WatchSource:0}: Error finding container f07b1a54d8b22cfaa80cfb7ff1643842eaef6efdf78fc02dfaa1132e8658f648: Status 404 returned error can't find the container with id f07b1a54d8b22cfaa80cfb7ff1643842eaef6efdf78fc02dfaa1132e8658f648 Mar 12 15:32:04 crc kubenswrapper[4832]: I0312 15:32:04.895928 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gbgmv"] Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.135119 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555492-tt6hh" event={"ID":"293116c3-d31a-499c-99ba-37e61728f952","Type":"ContainerDied","Data":"d157fa4969d26e2012bfa8c48281fc45297dbf7ee75cf03452d747f7b917211a"} Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.135474 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d157fa4969d26e2012bfa8c48281fc45297dbf7ee75cf03452d747f7b917211a" Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.135144 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555492-tt6hh" Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.137044 4832 generic.go:334] "Generic (PLEG): container finished" podID="de69002f-c9a2-4ec9-b00f-e502a59e3e9e" containerID="4714a55a8dbe336ce0fcf6e68b54083246c7d2ab8230e4de3110298e9165a49d" exitCode=0 Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.137091 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbgmv" event={"ID":"de69002f-c9a2-4ec9-b00f-e502a59e3e9e","Type":"ContainerDied","Data":"4714a55a8dbe336ce0fcf6e68b54083246c7d2ab8230e4de3110298e9165a49d"} Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.137123 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbgmv" event={"ID":"de69002f-c9a2-4ec9-b00f-e502a59e3e9e","Type":"ContainerStarted","Data":"f07b1a54d8b22cfaa80cfb7ff1643842eaef6efdf78fc02dfaa1132e8658f648"} Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.139942 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.638609 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.639608 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555486-q5846"] Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.649844 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555486-q5846"] Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.739643 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ceilometer-compute-config-data-1\") pod \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.739706 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx5dg\" (UniqueName: \"kubernetes.io/projected/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-kube-api-access-fx5dg\") pod \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.739747 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ceilometer-compute-config-data-0\") pod \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.739838 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-inventory\") pod \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.739928 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-telemetry-combined-ca-bundle\") pod \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.739960 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ssh-key-openstack-edpm-ipam\") pod \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.739992 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ceilometer-compute-config-data-2\") pod \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\" (UID: \"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e\") " Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.748930 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e" (UID: "bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.773294 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-kube-api-access-fx5dg" (OuterVolumeSpecName: "kube-api-access-fx5dg") pod "bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e" (UID: "bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e"). InnerVolumeSpecName "kube-api-access-fx5dg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.781174 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e" (UID: "bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.783897 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e" (UID: "bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.802623 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-inventory" (OuterVolumeSpecName: "inventory") pod "bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e" (UID: "bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.803468 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e" (UID: "bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.814371 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e" (UID: "bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.841619 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.841821 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx5dg\" (UniqueName: \"kubernetes.io/projected/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-kube-api-access-fx5dg\") on node \"crc\" DevicePath \"\"" Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.841831 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.841841 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.841852 4832 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:32:05 crc 
kubenswrapper[4832]: I0312 15:32:05.841860 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 15:32:05 crc kubenswrapper[4832]: I0312 15:32:05.841869 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 12 15:32:06 crc kubenswrapper[4832]: I0312 15:32:06.145380 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" event={"ID":"bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e","Type":"ContainerDied","Data":"80c07fc4580ea42c6daa66f74ce9964ed273d388d045fb5bc71dbbdac1328eeb"} Mar 12 15:32:06 crc kubenswrapper[4832]: I0312 15:32:06.145420 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80c07fc4580ea42c6daa66f74ce9964ed273d388d045fb5bc71dbbdac1328eeb" Mar 12 15:32:06 crc kubenswrapper[4832]: I0312 15:32:06.145468 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ft68d" Mar 12 15:32:06 crc kubenswrapper[4832]: I0312 15:32:06.631866 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1433b9c5-e221-4842-8159-342b2359622f" path="/var/lib/kubelet/pods/1433b9c5-e221-4842-8159-342b2359622f/volumes" Mar 12 15:32:07 crc kubenswrapper[4832]: I0312 15:32:07.157262 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbgmv" event={"ID":"de69002f-c9a2-4ec9-b00f-e502a59e3e9e","Type":"ContainerStarted","Data":"daa5378d4e5cf0ab89343c20eb15f3872493c49520774713617fd92337f57976"} Mar 12 15:32:08 crc kubenswrapper[4832]: I0312 15:32:08.172070 4832 generic.go:334] "Generic (PLEG): container finished" podID="de69002f-c9a2-4ec9-b00f-e502a59e3e9e" containerID="daa5378d4e5cf0ab89343c20eb15f3872493c49520774713617fd92337f57976" exitCode=0 Mar 12 15:32:08 crc kubenswrapper[4832]: I0312 15:32:08.172144 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbgmv" event={"ID":"de69002f-c9a2-4ec9-b00f-e502a59e3e9e","Type":"ContainerDied","Data":"daa5378d4e5cf0ab89343c20eb15f3872493c49520774713617fd92337f57976"} Mar 12 15:32:11 crc kubenswrapper[4832]: I0312 15:32:11.213169 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbgmv" event={"ID":"de69002f-c9a2-4ec9-b00f-e502a59e3e9e","Type":"ContainerStarted","Data":"e175d56ae3195e0a5aaacc4856ba0701462d7c66323220fb8326cbfa5974fc51"} Mar 12 15:32:11 crc kubenswrapper[4832]: I0312 15:32:11.245961 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gbgmv" podStartSLOduration=2.396316984 podStartE2EDuration="7.24593557s" podCreationTimestamp="2026-03-12 15:32:04 +0000 UTC" firstStartedPulling="2026-03-12 15:32:05.139676855 +0000 UTC m=+2683.783691081" lastFinishedPulling="2026-03-12 15:32:09.989295411 
+0000 UTC m=+2688.633309667" observedRunningTime="2026-03-12 15:32:11.242129102 +0000 UTC m=+2689.886143348" watchObservedRunningTime="2026-03-12 15:32:11.24593557 +0000 UTC m=+2689.889949836" Mar 12 15:32:14 crc kubenswrapper[4832]: I0312 15:32:14.366272 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gbgmv" Mar 12 15:32:14 crc kubenswrapper[4832]: I0312 15:32:14.366693 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gbgmv" Mar 12 15:32:15 crc kubenswrapper[4832]: I0312 15:32:15.418196 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gbgmv" podUID="de69002f-c9a2-4ec9-b00f-e502a59e3e9e" containerName="registry-server" probeResult="failure" output=< Mar 12 15:32:15 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Mar 12 15:32:15 crc kubenswrapper[4832]: > Mar 12 15:32:24 crc kubenswrapper[4832]: I0312 15:32:24.425330 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gbgmv" Mar 12 15:32:24 crc kubenswrapper[4832]: I0312 15:32:24.491377 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gbgmv" Mar 12 15:32:24 crc kubenswrapper[4832]: I0312 15:32:24.661627 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gbgmv"] Mar 12 15:32:26 crc kubenswrapper[4832]: I0312 15:32:26.345343 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gbgmv" podUID="de69002f-c9a2-4ec9-b00f-e502a59e3e9e" containerName="registry-server" containerID="cri-o://e175d56ae3195e0a5aaacc4856ba0701462d7c66323220fb8326cbfa5974fc51" gracePeriod=2 Mar 12 15:32:26 crc kubenswrapper[4832]: I0312 15:32:26.836985 4832 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbgmv" Mar 12 15:32:26 crc kubenswrapper[4832]: I0312 15:32:26.976983 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de69002f-c9a2-4ec9-b00f-e502a59e3e9e-catalog-content\") pod \"de69002f-c9a2-4ec9-b00f-e502a59e3e9e\" (UID: \"de69002f-c9a2-4ec9-b00f-e502a59e3e9e\") " Mar 12 15:32:26 crc kubenswrapper[4832]: I0312 15:32:26.977440 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndfcg\" (UniqueName: \"kubernetes.io/projected/de69002f-c9a2-4ec9-b00f-e502a59e3e9e-kube-api-access-ndfcg\") pod \"de69002f-c9a2-4ec9-b00f-e502a59e3e9e\" (UID: \"de69002f-c9a2-4ec9-b00f-e502a59e3e9e\") " Mar 12 15:32:26 crc kubenswrapper[4832]: I0312 15:32:26.978283 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de69002f-c9a2-4ec9-b00f-e502a59e3e9e-utilities\") pod \"de69002f-c9a2-4ec9-b00f-e502a59e3e9e\" (UID: \"de69002f-c9a2-4ec9-b00f-e502a59e3e9e\") " Mar 12 15:32:26 crc kubenswrapper[4832]: I0312 15:32:26.979600 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de69002f-c9a2-4ec9-b00f-e502a59e3e9e-utilities" (OuterVolumeSpecName: "utilities") pod "de69002f-c9a2-4ec9-b00f-e502a59e3e9e" (UID: "de69002f-c9a2-4ec9-b00f-e502a59e3e9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:32:26 crc kubenswrapper[4832]: I0312 15:32:26.998655 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de69002f-c9a2-4ec9-b00f-e502a59e3e9e-kube-api-access-ndfcg" (OuterVolumeSpecName: "kube-api-access-ndfcg") pod "de69002f-c9a2-4ec9-b00f-e502a59e3e9e" (UID: "de69002f-c9a2-4ec9-b00f-e502a59e3e9e"). InnerVolumeSpecName "kube-api-access-ndfcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:32:27 crc kubenswrapper[4832]: I0312 15:32:27.080241 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de69002f-c9a2-4ec9-b00f-e502a59e3e9e-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:32:27 crc kubenswrapper[4832]: I0312 15:32:27.080276 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndfcg\" (UniqueName: \"kubernetes.io/projected/de69002f-c9a2-4ec9-b00f-e502a59e3e9e-kube-api-access-ndfcg\") on node \"crc\" DevicePath \"\"" Mar 12 15:32:27 crc kubenswrapper[4832]: I0312 15:32:27.116216 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de69002f-c9a2-4ec9-b00f-e502a59e3e9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de69002f-c9a2-4ec9-b00f-e502a59e3e9e" (UID: "de69002f-c9a2-4ec9-b00f-e502a59e3e9e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:32:27 crc kubenswrapper[4832]: I0312 15:32:27.180844 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de69002f-c9a2-4ec9-b00f-e502a59e3e9e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:32:27 crc kubenswrapper[4832]: I0312 15:32:27.354837 4832 generic.go:334] "Generic (PLEG): container finished" podID="de69002f-c9a2-4ec9-b00f-e502a59e3e9e" containerID="e175d56ae3195e0a5aaacc4856ba0701462d7c66323220fb8326cbfa5974fc51" exitCode=0 Mar 12 15:32:27 crc kubenswrapper[4832]: I0312 15:32:27.354887 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbgmv" event={"ID":"de69002f-c9a2-4ec9-b00f-e502a59e3e9e","Type":"ContainerDied","Data":"e175d56ae3195e0a5aaacc4856ba0701462d7c66323220fb8326cbfa5974fc51"} Mar 12 15:32:27 crc kubenswrapper[4832]: I0312 15:32:27.354922 4832 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-gbgmv" event={"ID":"de69002f-c9a2-4ec9-b00f-e502a59e3e9e","Type":"ContainerDied","Data":"f07b1a54d8b22cfaa80cfb7ff1643842eaef6efdf78fc02dfaa1132e8658f648"} Mar 12 15:32:27 crc kubenswrapper[4832]: I0312 15:32:27.354942 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbgmv" Mar 12 15:32:27 crc kubenswrapper[4832]: I0312 15:32:27.354964 4832 scope.go:117] "RemoveContainer" containerID="e175d56ae3195e0a5aaacc4856ba0701462d7c66323220fb8326cbfa5974fc51" Mar 12 15:32:27 crc kubenswrapper[4832]: I0312 15:32:27.387959 4832 scope.go:117] "RemoveContainer" containerID="daa5378d4e5cf0ab89343c20eb15f3872493c49520774713617fd92337f57976" Mar 12 15:32:27 crc kubenswrapper[4832]: I0312 15:32:27.438867 4832 scope.go:117] "RemoveContainer" containerID="4714a55a8dbe336ce0fcf6e68b54083246c7d2ab8230e4de3110298e9165a49d" Mar 12 15:32:27 crc kubenswrapper[4832]: I0312 15:32:27.448732 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gbgmv"] Mar 12 15:32:27 crc kubenswrapper[4832]: I0312 15:32:27.463090 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gbgmv"] Mar 12 15:32:27 crc kubenswrapper[4832]: I0312 15:32:27.508408 4832 scope.go:117] "RemoveContainer" containerID="e175d56ae3195e0a5aaacc4856ba0701462d7c66323220fb8326cbfa5974fc51" Mar 12 15:32:27 crc kubenswrapper[4832]: E0312 15:32:27.509331 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e175d56ae3195e0a5aaacc4856ba0701462d7c66323220fb8326cbfa5974fc51\": container with ID starting with e175d56ae3195e0a5aaacc4856ba0701462d7c66323220fb8326cbfa5974fc51 not found: ID does not exist" containerID="e175d56ae3195e0a5aaacc4856ba0701462d7c66323220fb8326cbfa5974fc51" Mar 12 15:32:27 crc kubenswrapper[4832]: I0312 15:32:27.509361 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e175d56ae3195e0a5aaacc4856ba0701462d7c66323220fb8326cbfa5974fc51"} err="failed to get container status \"e175d56ae3195e0a5aaacc4856ba0701462d7c66323220fb8326cbfa5974fc51\": rpc error: code = NotFound desc = could not find container \"e175d56ae3195e0a5aaacc4856ba0701462d7c66323220fb8326cbfa5974fc51\": container with ID starting with e175d56ae3195e0a5aaacc4856ba0701462d7c66323220fb8326cbfa5974fc51 not found: ID does not exist" Mar 12 15:32:27 crc kubenswrapper[4832]: I0312 15:32:27.509386 4832 scope.go:117] "RemoveContainer" containerID="daa5378d4e5cf0ab89343c20eb15f3872493c49520774713617fd92337f57976" Mar 12 15:32:27 crc kubenswrapper[4832]: E0312 15:32:27.509868 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daa5378d4e5cf0ab89343c20eb15f3872493c49520774713617fd92337f57976\": container with ID starting with daa5378d4e5cf0ab89343c20eb15f3872493c49520774713617fd92337f57976 not found: ID does not exist" containerID="daa5378d4e5cf0ab89343c20eb15f3872493c49520774713617fd92337f57976" Mar 12 15:32:27 crc kubenswrapper[4832]: I0312 15:32:27.509932 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daa5378d4e5cf0ab89343c20eb15f3872493c49520774713617fd92337f57976"} err="failed to get container status \"daa5378d4e5cf0ab89343c20eb15f3872493c49520774713617fd92337f57976\": rpc error: code = NotFound desc = could not find container \"daa5378d4e5cf0ab89343c20eb15f3872493c49520774713617fd92337f57976\": container with ID starting with daa5378d4e5cf0ab89343c20eb15f3872493c49520774713617fd92337f57976 not found: ID does not exist" Mar 12 15:32:27 crc kubenswrapper[4832]: I0312 15:32:27.509969 4832 scope.go:117] "RemoveContainer" containerID="4714a55a8dbe336ce0fcf6e68b54083246c7d2ab8230e4de3110298e9165a49d" Mar 12 15:32:27 crc kubenswrapper[4832]: E0312 
15:32:27.510613 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4714a55a8dbe336ce0fcf6e68b54083246c7d2ab8230e4de3110298e9165a49d\": container with ID starting with 4714a55a8dbe336ce0fcf6e68b54083246c7d2ab8230e4de3110298e9165a49d not found: ID does not exist" containerID="4714a55a8dbe336ce0fcf6e68b54083246c7d2ab8230e4de3110298e9165a49d" Mar 12 15:32:27 crc kubenswrapper[4832]: I0312 15:32:27.510659 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4714a55a8dbe336ce0fcf6e68b54083246c7d2ab8230e4de3110298e9165a49d"} err="failed to get container status \"4714a55a8dbe336ce0fcf6e68b54083246c7d2ab8230e4de3110298e9165a49d\": rpc error: code = NotFound desc = could not find container \"4714a55a8dbe336ce0fcf6e68b54083246c7d2ab8230e4de3110298e9165a49d\": container with ID starting with 4714a55a8dbe336ce0fcf6e68b54083246c7d2ab8230e4de3110298e9165a49d not found: ID does not exist" Mar 12 15:32:28 crc kubenswrapper[4832]: I0312 15:32:28.629265 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de69002f-c9a2-4ec9-b00f-e502a59e3e9e" path="/var/lib/kubelet/pods/de69002f-c9a2-4ec9-b00f-e502a59e3e9e/volumes" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.688314 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Mar 12 15:32:49 crc kubenswrapper[4832]: E0312 15:32:49.689192 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.689212 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 12 15:32:49 crc kubenswrapper[4832]: E0312 15:32:49.689223 4832 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="de69002f-c9a2-4ec9-b00f-e502a59e3e9e" containerName="extract-utilities" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.689231 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="de69002f-c9a2-4ec9-b00f-e502a59e3e9e" containerName="extract-utilities" Mar 12 15:32:49 crc kubenswrapper[4832]: E0312 15:32:49.689253 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="293116c3-d31a-499c-99ba-37e61728f952" containerName="oc" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.689259 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="293116c3-d31a-499c-99ba-37e61728f952" containerName="oc" Mar 12 15:32:49 crc kubenswrapper[4832]: E0312 15:32:49.689268 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de69002f-c9a2-4ec9-b00f-e502a59e3e9e" containerName="extract-content" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.689274 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="de69002f-c9a2-4ec9-b00f-e502a59e3e9e" containerName="extract-content" Mar 12 15:32:49 crc kubenswrapper[4832]: E0312 15:32:49.689299 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de69002f-c9a2-4ec9-b00f-e502a59e3e9e" containerName="registry-server" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.689305 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="de69002f-c9a2-4ec9-b00f-e502a59e3e9e" containerName="registry-server" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.689456 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="293116c3-d31a-499c-99ba-37e61728f952" containerName="oc" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.689473 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.689484 4832 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="de69002f-c9a2-4ec9-b00f-e502a59e3e9e" containerName="registry-server" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.690077 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.696493 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qwgpm" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.696717 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.697026 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.697607 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.708994 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.724098 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d12cc2d-980d-4992-ac59-1d874529ad70-config-data\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.724170 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d12cc2d-980d-4992-ac59-1d874529ad70-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.724211 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d12cc2d-980d-4992-ac59-1d874529ad70-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.825622 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5d12cc2d-980d-4992-ac59-1d874529ad70-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.825709 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkmcv\" (UniqueName: \"kubernetes.io/projected/5d12cc2d-980d-4992-ac59-1d874529ad70-kube-api-access-nkmcv\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.825783 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d12cc2d-980d-4992-ac59-1d874529ad70-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.825937 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.825991 4832 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d12cc2d-980d-4992-ac59-1d874529ad70-config-data\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.826103 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d12cc2d-980d-4992-ac59-1d874529ad70-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.826130 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5d12cc2d-980d-4992-ac59-1d874529ad70-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.826215 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d12cc2d-980d-4992-ac59-1d874529ad70-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.826321 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5d12cc2d-980d-4992-ac59-1d874529ad70-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.827359 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d12cc2d-980d-4992-ac59-1d874529ad70-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.827478 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d12cc2d-980d-4992-ac59-1d874529ad70-config-data\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.833716 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d12cc2d-980d-4992-ac59-1d874529ad70-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.928040 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d12cc2d-980d-4992-ac59-1d874529ad70-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.928095 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.928141 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5d12cc2d-980d-4992-ac59-1d874529ad70-test-operator-ephemeral-temporary\") pod 
\"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.928200 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5d12cc2d-980d-4992-ac59-1d874529ad70-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.928235 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5d12cc2d-980d-4992-ac59-1d874529ad70-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.928253 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkmcv\" (UniqueName: \"kubernetes.io/projected/5d12cc2d-980d-4992-ac59-1d874529ad70-kube-api-access-nkmcv\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.928530 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.928728 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5d12cc2d-980d-4992-ac59-1d874529ad70-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: 
\"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.928809 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5d12cc2d-980d-4992-ac59-1d874529ad70-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.933089 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5d12cc2d-980d-4992-ac59-1d874529ad70-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.939106 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d12cc2d-980d-4992-ac59-1d874529ad70-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.944662 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkmcv\" (UniqueName: \"kubernetes.io/projected/5d12cc2d-980d-4992-ac59-1d874529ad70-kube-api-access-nkmcv\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:49 crc kubenswrapper[4832]: I0312 15:32:49.954301 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " pod="openstack/tempest-tests-tempest" Mar 12 15:32:50 crc kubenswrapper[4832]: I0312 15:32:50.007193 4832 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 12 15:32:50 crc kubenswrapper[4832]: I0312 15:32:50.455276 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 12 15:32:50 crc kubenswrapper[4832]: I0312 15:32:50.576851 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5d12cc2d-980d-4992-ac59-1d874529ad70","Type":"ContainerStarted","Data":"74172087e3ae04841d0f0e01a2935fcc0b9d2dfc8e06e1796ba167aba3fc1818"} Mar 12 15:32:51 crc kubenswrapper[4832]: I0312 15:32:51.899084 4832 scope.go:117] "RemoveContainer" containerID="629b1f8959958679c48cbf38f798cb0f860957f5363cef618aa5169e35fafaa5" Mar 12 15:32:56 crc kubenswrapper[4832]: I0312 15:32:56.314909 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:32:56 crc kubenswrapper[4832]: I0312 15:32:56.315374 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:33:18 crc kubenswrapper[4832]: E0312 15:33:18.334065 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Mar 12 15:33:18 crc kubenswrapper[4832]: E0312 15:33:18.334980 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkmcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(5d12cc2d-980d-4992-ac59-1d874529ad70): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 15:33:18 crc kubenswrapper[4832]: E0312 15:33:18.336349 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="5d12cc2d-980d-4992-ac59-1d874529ad70" Mar 12 15:33:18 crc kubenswrapper[4832]: E0312 15:33:18.864395 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="5d12cc2d-980d-4992-ac59-1d874529ad70" Mar 12 15:33:26 crc 
kubenswrapper[4832]: I0312 15:33:26.314175 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:33:26 crc kubenswrapper[4832]: I0312 15:33:26.314762 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:33:33 crc kubenswrapper[4832]: I0312 15:33:33.152531 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 12 15:33:35 crc kubenswrapper[4832]: I0312 15:33:35.010164 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5d12cc2d-980d-4992-ac59-1d874529ad70","Type":"ContainerStarted","Data":"18e9f3223c4ba8c69a4697aa9770d4aa346caf86ec318f691dfd52d0c27a6ed5"} Mar 12 15:33:35 crc kubenswrapper[4832]: I0312 15:33:35.040052 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.3379471800000005 podStartE2EDuration="47.040030232s" podCreationTimestamp="2026-03-12 15:32:48 +0000 UTC" firstStartedPulling="2026-03-12 15:32:50.448378355 +0000 UTC m=+2729.092392581" lastFinishedPulling="2026-03-12 15:33:33.150461407 +0000 UTC m=+2771.794475633" observedRunningTime="2026-03-12 15:33:35.034859865 +0000 UTC m=+2773.678874091" watchObservedRunningTime="2026-03-12 15:33:35.040030232 +0000 UTC m=+2773.684044458" Mar 12 15:33:52 crc kubenswrapper[4832]: I0312 15:33:52.093965 4832 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-jdh2n"] Mar 12 15:33:52 crc kubenswrapper[4832]: I0312 15:33:52.096671 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jdh2n" Mar 12 15:33:52 crc kubenswrapper[4832]: I0312 15:33:52.102611 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jdh2n"] Mar 12 15:33:52 crc kubenswrapper[4832]: I0312 15:33:52.205148 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf-utilities\") pod \"certified-operators-jdh2n\" (UID: \"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf\") " pod="openshift-marketplace/certified-operators-jdh2n" Mar 12 15:33:52 crc kubenswrapper[4832]: I0312 15:33:52.205412 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf-catalog-content\") pod \"certified-operators-jdh2n\" (UID: \"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf\") " pod="openshift-marketplace/certified-operators-jdh2n" Mar 12 15:33:52 crc kubenswrapper[4832]: I0312 15:33:52.205453 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69svn\" (UniqueName: \"kubernetes.io/projected/602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf-kube-api-access-69svn\") pod \"certified-operators-jdh2n\" (UID: \"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf\") " pod="openshift-marketplace/certified-operators-jdh2n" Mar 12 15:33:52 crc kubenswrapper[4832]: I0312 15:33:52.306931 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf-catalog-content\") pod \"certified-operators-jdh2n\" (UID: 
\"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf\") " pod="openshift-marketplace/certified-operators-jdh2n" Mar 12 15:33:52 crc kubenswrapper[4832]: I0312 15:33:52.307003 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69svn\" (UniqueName: \"kubernetes.io/projected/602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf-kube-api-access-69svn\") pod \"certified-operators-jdh2n\" (UID: \"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf\") " pod="openshift-marketplace/certified-operators-jdh2n" Mar 12 15:33:52 crc kubenswrapper[4832]: I0312 15:33:52.307062 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf-utilities\") pod \"certified-operators-jdh2n\" (UID: \"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf\") " pod="openshift-marketplace/certified-operators-jdh2n" Mar 12 15:33:52 crc kubenswrapper[4832]: I0312 15:33:52.307738 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf-utilities\") pod \"certified-operators-jdh2n\" (UID: \"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf\") " pod="openshift-marketplace/certified-operators-jdh2n" Mar 12 15:33:52 crc kubenswrapper[4832]: I0312 15:33:52.308054 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf-catalog-content\") pod \"certified-operators-jdh2n\" (UID: \"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf\") " pod="openshift-marketplace/certified-operators-jdh2n" Mar 12 15:33:52 crc kubenswrapper[4832]: I0312 15:33:52.334096 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69svn\" (UniqueName: \"kubernetes.io/projected/602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf-kube-api-access-69svn\") pod \"certified-operators-jdh2n\" (UID: 
\"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf\") " pod="openshift-marketplace/certified-operators-jdh2n" Mar 12 15:33:52 crc kubenswrapper[4832]: I0312 15:33:52.432775 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jdh2n" Mar 12 15:33:52 crc kubenswrapper[4832]: I0312 15:33:52.956251 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jdh2n"] Mar 12 15:33:53 crc kubenswrapper[4832]: I0312 15:33:53.163924 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdh2n" event={"ID":"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf","Type":"ContainerStarted","Data":"159ed0f8e4e771257b3f5f591f56269317ebaacee496e4833e9f9e43e1aa8121"} Mar 12 15:33:53 crc kubenswrapper[4832]: I0312 15:33:53.164078 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdh2n" event={"ID":"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf","Type":"ContainerStarted","Data":"50ac5b23647eed2331977ffefea7a2dcc4f8b23e15633162e9d1decd9f25a91b"} Mar 12 15:33:54 crc kubenswrapper[4832]: I0312 15:33:54.181439 4832 generic.go:334] "Generic (PLEG): container finished" podID="602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf" containerID="159ed0f8e4e771257b3f5f591f56269317ebaacee496e4833e9f9e43e1aa8121" exitCode=0 Mar 12 15:33:54 crc kubenswrapper[4832]: I0312 15:33:54.181497 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdh2n" event={"ID":"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf","Type":"ContainerDied","Data":"159ed0f8e4e771257b3f5f591f56269317ebaacee496e4833e9f9e43e1aa8121"} Mar 12 15:33:56 crc kubenswrapper[4832]: I0312 15:33:56.221433 4832 generic.go:334] "Generic (PLEG): container finished" podID="602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf" containerID="d9b4483ba70dc29b5f1502771251be686de0cb0cdafd1b425b5347614662e0fd" exitCode=0 Mar 12 15:33:56 crc kubenswrapper[4832]: I0312 
15:33:56.221823 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdh2n" event={"ID":"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf","Type":"ContainerDied","Data":"d9b4483ba70dc29b5f1502771251be686de0cb0cdafd1b425b5347614662e0fd"} Mar 12 15:33:56 crc kubenswrapper[4832]: I0312 15:33:56.314148 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:33:56 crc kubenswrapper[4832]: I0312 15:33:56.314252 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:33:56 crc kubenswrapper[4832]: I0312 15:33:56.314328 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 15:33:56 crc kubenswrapper[4832]: I0312 15:33:56.315677 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b8dff5b7b35c730d242efb74081df9ddc7097bb83cf1387ad12def4b71de8e8a"} pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 15:33:56 crc kubenswrapper[4832]: I0312 15:33:56.315796 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" 
containerID="cri-o://b8dff5b7b35c730d242efb74081df9ddc7097bb83cf1387ad12def4b71de8e8a" gracePeriod=600 Mar 12 15:33:57 crc kubenswrapper[4832]: I0312 15:33:57.231765 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdh2n" event={"ID":"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf","Type":"ContainerStarted","Data":"6fd9f4f038041f48812960cf604d4c0bf80a24e66383e49d8940287c30a821fa"} Mar 12 15:33:57 crc kubenswrapper[4832]: I0312 15:33:57.234588 4832 generic.go:334] "Generic (PLEG): container finished" podID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerID="b8dff5b7b35c730d242efb74081df9ddc7097bb83cf1387ad12def4b71de8e8a" exitCode=0 Mar 12 15:33:57 crc kubenswrapper[4832]: I0312 15:33:57.234626 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerDied","Data":"b8dff5b7b35c730d242efb74081df9ddc7097bb83cf1387ad12def4b71de8e8a"} Mar 12 15:33:57 crc kubenswrapper[4832]: I0312 15:33:57.234656 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerStarted","Data":"91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05"} Mar 12 15:33:57 crc kubenswrapper[4832]: I0312 15:33:57.234677 4832 scope.go:117] "RemoveContainer" containerID="13a5bb1395de7e4f458131a3fe02ebb9b7bd8ef53cd5930da5d4ee0fdeb23146" Mar 12 15:33:57 crc kubenswrapper[4832]: I0312 15:33:57.258346 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jdh2n" podStartSLOduration=2.733363181 podStartE2EDuration="5.258327683s" podCreationTimestamp="2026-03-12 15:33:52 +0000 UTC" firstStartedPulling="2026-03-12 15:33:54.183427843 +0000 UTC m=+2792.827442069" lastFinishedPulling="2026-03-12 15:33:56.708392325 +0000 UTC 
m=+2795.352406571" observedRunningTime="2026-03-12 15:33:57.250043567 +0000 UTC m=+2795.894057793" watchObservedRunningTime="2026-03-12 15:33:57.258327683 +0000 UTC m=+2795.902341909" Mar 12 15:34:00 crc kubenswrapper[4832]: I0312 15:34:00.152347 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555494-w59zw"] Mar 12 15:34:00 crc kubenswrapper[4832]: I0312 15:34:00.154272 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555494-w59zw" Mar 12 15:34:00 crc kubenswrapper[4832]: I0312 15:34:00.157551 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:34:00 crc kubenswrapper[4832]: I0312 15:34:00.161095 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:34:00 crc kubenswrapper[4832]: I0312 15:34:00.161536 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:34:00 crc kubenswrapper[4832]: I0312 15:34:00.175966 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555494-w59zw"] Mar 12 15:34:00 crc kubenswrapper[4832]: I0312 15:34:00.181747 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9zbn\" (UniqueName: \"kubernetes.io/projected/fa30eaf4-ab42-43ab-906d-0bc5919aded5-kube-api-access-c9zbn\") pod \"auto-csr-approver-29555494-w59zw\" (UID: \"fa30eaf4-ab42-43ab-906d-0bc5919aded5\") " pod="openshift-infra/auto-csr-approver-29555494-w59zw" Mar 12 15:34:00 crc kubenswrapper[4832]: I0312 15:34:00.283591 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9zbn\" (UniqueName: \"kubernetes.io/projected/fa30eaf4-ab42-43ab-906d-0bc5919aded5-kube-api-access-c9zbn\") pod 
\"auto-csr-approver-29555494-w59zw\" (UID: \"fa30eaf4-ab42-43ab-906d-0bc5919aded5\") " pod="openshift-infra/auto-csr-approver-29555494-w59zw" Mar 12 15:34:00 crc kubenswrapper[4832]: I0312 15:34:00.318408 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9zbn\" (UniqueName: \"kubernetes.io/projected/fa30eaf4-ab42-43ab-906d-0bc5919aded5-kube-api-access-c9zbn\") pod \"auto-csr-approver-29555494-w59zw\" (UID: \"fa30eaf4-ab42-43ab-906d-0bc5919aded5\") " pod="openshift-infra/auto-csr-approver-29555494-w59zw" Mar 12 15:34:00 crc kubenswrapper[4832]: I0312 15:34:00.491960 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555494-w59zw" Mar 12 15:34:00 crc kubenswrapper[4832]: I0312 15:34:00.937580 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555494-w59zw"] Mar 12 15:34:01 crc kubenswrapper[4832]: I0312 15:34:01.277713 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555494-w59zw" event={"ID":"fa30eaf4-ab42-43ab-906d-0bc5919aded5","Type":"ContainerStarted","Data":"f84f392185305830f6ede8a728810fee480369f48a7e9b039cbfaf2ad9e10449"} Mar 12 15:34:02 crc kubenswrapper[4832]: I0312 15:34:02.288009 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555494-w59zw" event={"ID":"fa30eaf4-ab42-43ab-906d-0bc5919aded5","Type":"ContainerStarted","Data":"0a4d7631bbcd9beb1c65d7d4e3f5645b0a4916448311991c445b9ba0a78a4188"} Mar 12 15:34:02 crc kubenswrapper[4832]: I0312 15:34:02.307414 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555494-w59zw" podStartSLOduration=1.471488773 podStartE2EDuration="2.307393469s" podCreationTimestamp="2026-03-12 15:34:00 +0000 UTC" firstStartedPulling="2026-03-12 15:34:00.952029724 +0000 UTC m=+2799.596043950" lastFinishedPulling="2026-03-12 
15:34:01.78793438 +0000 UTC m=+2800.431948646" observedRunningTime="2026-03-12 15:34:02.304082985 +0000 UTC m=+2800.948097211" watchObservedRunningTime="2026-03-12 15:34:02.307393469 +0000 UTC m=+2800.951407685" Mar 12 15:34:02 crc kubenswrapper[4832]: I0312 15:34:02.433265 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jdh2n" Mar 12 15:34:02 crc kubenswrapper[4832]: I0312 15:34:02.433683 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jdh2n" Mar 12 15:34:02 crc kubenswrapper[4832]: I0312 15:34:02.472938 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jdh2n" Mar 12 15:34:03 crc kubenswrapper[4832]: I0312 15:34:03.296488 4832 generic.go:334] "Generic (PLEG): container finished" podID="fa30eaf4-ab42-43ab-906d-0bc5919aded5" containerID="0a4d7631bbcd9beb1c65d7d4e3f5645b0a4916448311991c445b9ba0a78a4188" exitCode=0 Mar 12 15:34:03 crc kubenswrapper[4832]: I0312 15:34:03.296587 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555494-w59zw" event={"ID":"fa30eaf4-ab42-43ab-906d-0bc5919aded5","Type":"ContainerDied","Data":"0a4d7631bbcd9beb1c65d7d4e3f5645b0a4916448311991c445b9ba0a78a4188"} Mar 12 15:34:03 crc kubenswrapper[4832]: I0312 15:34:03.369468 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jdh2n" Mar 12 15:34:03 crc kubenswrapper[4832]: I0312 15:34:03.427117 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jdh2n"] Mar 12 15:34:04 crc kubenswrapper[4832]: I0312 15:34:04.739211 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555494-w59zw" Mar 12 15:34:04 crc kubenswrapper[4832]: I0312 15:34:04.794492 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9zbn\" (UniqueName: \"kubernetes.io/projected/fa30eaf4-ab42-43ab-906d-0bc5919aded5-kube-api-access-c9zbn\") pod \"fa30eaf4-ab42-43ab-906d-0bc5919aded5\" (UID: \"fa30eaf4-ab42-43ab-906d-0bc5919aded5\") " Mar 12 15:34:04 crc kubenswrapper[4832]: I0312 15:34:04.802729 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa30eaf4-ab42-43ab-906d-0bc5919aded5-kube-api-access-c9zbn" (OuterVolumeSpecName: "kube-api-access-c9zbn") pod "fa30eaf4-ab42-43ab-906d-0bc5919aded5" (UID: "fa30eaf4-ab42-43ab-906d-0bc5919aded5"). InnerVolumeSpecName "kube-api-access-c9zbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:34:04 crc kubenswrapper[4832]: I0312 15:34:04.896778 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9zbn\" (UniqueName: \"kubernetes.io/projected/fa30eaf4-ab42-43ab-906d-0bc5919aded5-kube-api-access-c9zbn\") on node \"crc\" DevicePath \"\"" Mar 12 15:34:05 crc kubenswrapper[4832]: I0312 15:34:05.317283 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555494-w59zw" event={"ID":"fa30eaf4-ab42-43ab-906d-0bc5919aded5","Type":"ContainerDied","Data":"f84f392185305830f6ede8a728810fee480369f48a7e9b039cbfaf2ad9e10449"} Mar 12 15:34:05 crc kubenswrapper[4832]: I0312 15:34:05.317327 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f84f392185305830f6ede8a728810fee480369f48a7e9b039cbfaf2ad9e10449" Mar 12 15:34:05 crc kubenswrapper[4832]: I0312 15:34:05.317356 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555494-w59zw" Mar 12 15:34:05 crc kubenswrapper[4832]: I0312 15:34:05.317430 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jdh2n" podUID="602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf" containerName="registry-server" containerID="cri-o://6fd9f4f038041f48812960cf604d4c0bf80a24e66383e49d8940287c30a821fa" gracePeriod=2 Mar 12 15:34:05 crc kubenswrapper[4832]: I0312 15:34:05.375296 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555488-2pzg4"] Mar 12 15:34:05 crc kubenswrapper[4832]: I0312 15:34:05.384762 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555488-2pzg4"] Mar 12 15:34:05 crc kubenswrapper[4832]: I0312 15:34:05.817695 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jdh2n" Mar 12 15:34:05 crc kubenswrapper[4832]: I0312 15:34:05.914726 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf-catalog-content\") pod \"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf\" (UID: \"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf\") " Mar 12 15:34:05 crc kubenswrapper[4832]: I0312 15:34:05.914852 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf-utilities\") pod \"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf\" (UID: \"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf\") " Mar 12 15:34:05 crc kubenswrapper[4832]: I0312 15:34:05.914931 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69svn\" (UniqueName: \"kubernetes.io/projected/602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf-kube-api-access-69svn\") pod 
\"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf\" (UID: \"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf\") " Mar 12 15:34:05 crc kubenswrapper[4832]: I0312 15:34:05.915560 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf-utilities" (OuterVolumeSpecName: "utilities") pod "602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf" (UID: "602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:34:05 crc kubenswrapper[4832]: I0312 15:34:05.941862 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf-kube-api-access-69svn" (OuterVolumeSpecName: "kube-api-access-69svn") pod "602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf" (UID: "602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf"). InnerVolumeSpecName "kube-api-access-69svn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.001817 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf" (UID: "602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.017391 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.017437 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69svn\" (UniqueName: \"kubernetes.io/projected/602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf-kube-api-access-69svn\") on node \"crc\" DevicePath \"\"" Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.017451 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.328488 4832 generic.go:334] "Generic (PLEG): container finished" podID="602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf" containerID="6fd9f4f038041f48812960cf604d4c0bf80a24e66383e49d8940287c30a821fa" exitCode=0 Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.328549 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdh2n" event={"ID":"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf","Type":"ContainerDied","Data":"6fd9f4f038041f48812960cf604d4c0bf80a24e66383e49d8940287c30a821fa"} Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.328601 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdh2n" event={"ID":"602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf","Type":"ContainerDied","Data":"50ac5b23647eed2331977ffefea7a2dcc4f8b23e15633162e9d1decd9f25a91b"} Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.328621 4832 scope.go:117] "RemoveContainer" containerID="6fd9f4f038041f48812960cf604d4c0bf80a24e66383e49d8940287c30a821fa" Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 
15:34:06.328615 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jdh2n" Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.348217 4832 scope.go:117] "RemoveContainer" containerID="d9b4483ba70dc29b5f1502771251be686de0cb0cdafd1b425b5347614662e0fd" Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.379298 4832 scope.go:117] "RemoveContainer" containerID="159ed0f8e4e771257b3f5f591f56269317ebaacee496e4833e9f9e43e1aa8121" Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.441360 4832 scope.go:117] "RemoveContainer" containerID="6fd9f4f038041f48812960cf604d4c0bf80a24e66383e49d8940287c30a821fa" Mar 12 15:34:06 crc kubenswrapper[4832]: E0312 15:34:06.441950 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fd9f4f038041f48812960cf604d4c0bf80a24e66383e49d8940287c30a821fa\": container with ID starting with 6fd9f4f038041f48812960cf604d4c0bf80a24e66383e49d8940287c30a821fa not found: ID does not exist" containerID="6fd9f4f038041f48812960cf604d4c0bf80a24e66383e49d8940287c30a821fa" Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.442005 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd9f4f038041f48812960cf604d4c0bf80a24e66383e49d8940287c30a821fa"} err="failed to get container status \"6fd9f4f038041f48812960cf604d4c0bf80a24e66383e49d8940287c30a821fa\": rpc error: code = NotFound desc = could not find container \"6fd9f4f038041f48812960cf604d4c0bf80a24e66383e49d8940287c30a821fa\": container with ID starting with 6fd9f4f038041f48812960cf604d4c0bf80a24e66383e49d8940287c30a821fa not found: ID does not exist" Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.442040 4832 scope.go:117] "RemoveContainer" containerID="d9b4483ba70dc29b5f1502771251be686de0cb0cdafd1b425b5347614662e0fd" Mar 12 15:34:06 crc kubenswrapper[4832]: E0312 15:34:06.442385 4832 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9b4483ba70dc29b5f1502771251be686de0cb0cdafd1b425b5347614662e0fd\": container with ID starting with d9b4483ba70dc29b5f1502771251be686de0cb0cdafd1b425b5347614662e0fd not found: ID does not exist" containerID="d9b4483ba70dc29b5f1502771251be686de0cb0cdafd1b425b5347614662e0fd" Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.442428 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b4483ba70dc29b5f1502771251be686de0cb0cdafd1b425b5347614662e0fd"} err="failed to get container status \"d9b4483ba70dc29b5f1502771251be686de0cb0cdafd1b425b5347614662e0fd\": rpc error: code = NotFound desc = could not find container \"d9b4483ba70dc29b5f1502771251be686de0cb0cdafd1b425b5347614662e0fd\": container with ID starting with d9b4483ba70dc29b5f1502771251be686de0cb0cdafd1b425b5347614662e0fd not found: ID does not exist" Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.442462 4832 scope.go:117] "RemoveContainer" containerID="159ed0f8e4e771257b3f5f591f56269317ebaacee496e4833e9f9e43e1aa8121" Mar 12 15:34:06 crc kubenswrapper[4832]: E0312 15:34:06.442750 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"159ed0f8e4e771257b3f5f591f56269317ebaacee496e4833e9f9e43e1aa8121\": container with ID starting with 159ed0f8e4e771257b3f5f591f56269317ebaacee496e4833e9f9e43e1aa8121 not found: ID does not exist" containerID="159ed0f8e4e771257b3f5f591f56269317ebaacee496e4833e9f9e43e1aa8121" Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.442774 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159ed0f8e4e771257b3f5f591f56269317ebaacee496e4833e9f9e43e1aa8121"} err="failed to get container status \"159ed0f8e4e771257b3f5f591f56269317ebaacee496e4833e9f9e43e1aa8121\": rpc error: code = NotFound 
desc = could not find container \"159ed0f8e4e771257b3f5f591f56269317ebaacee496e4833e9f9e43e1aa8121\": container with ID starting with 159ed0f8e4e771257b3f5f591f56269317ebaacee496e4833e9f9e43e1aa8121 not found: ID does not exist" Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.450586 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jdh2n"] Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.462869 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jdh2n"] Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.629744 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf" path="/var/lib/kubelet/pods/602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf/volumes" Mar 12 15:34:06 crc kubenswrapper[4832]: I0312 15:34:06.630792 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29c2304-219e-4871-8dd0-232c9eaa6500" path="/var/lib/kubelet/pods/c29c2304-219e-4871-8dd0-232c9eaa6500/volumes" Mar 12 15:34:52 crc kubenswrapper[4832]: I0312 15:34:52.035877 4832 scope.go:117] "RemoveContainer" containerID="896580ed7a64ef1366a062f8012304755c7aba364cc126d03e8f5bbd3015bd45" Mar 12 15:35:56 crc kubenswrapper[4832]: I0312 15:35:56.314430 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:35:56 crc kubenswrapper[4832]: I0312 15:35:56.315075 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 12 15:36:00 crc kubenswrapper[4832]: I0312 15:36:00.141908 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555496-vptrr"] Mar 12 15:36:00 crc kubenswrapper[4832]: E0312 15:36:00.142877 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf" containerName="extract-content" Mar 12 15:36:00 crc kubenswrapper[4832]: I0312 15:36:00.142895 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf" containerName="extract-content" Mar 12 15:36:00 crc kubenswrapper[4832]: E0312 15:36:00.142921 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf" containerName="registry-server" Mar 12 15:36:00 crc kubenswrapper[4832]: I0312 15:36:00.142929 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf" containerName="registry-server" Mar 12 15:36:00 crc kubenswrapper[4832]: E0312 15:36:00.142942 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf" containerName="extract-utilities" Mar 12 15:36:00 crc kubenswrapper[4832]: I0312 15:36:00.142950 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf" containerName="extract-utilities" Mar 12 15:36:00 crc kubenswrapper[4832]: E0312 15:36:00.142960 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa30eaf4-ab42-43ab-906d-0bc5919aded5" containerName="oc" Mar 12 15:36:00 crc kubenswrapper[4832]: I0312 15:36:00.142967 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa30eaf4-ab42-43ab-906d-0bc5919aded5" containerName="oc" Mar 12 15:36:00 crc kubenswrapper[4832]: I0312 15:36:00.143270 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="602f1bb6-60f1-4f4f-bb4f-1f1eecd4e8bf" containerName="registry-server" Mar 12 15:36:00 crc kubenswrapper[4832]: I0312 
15:36:00.143290 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa30eaf4-ab42-43ab-906d-0bc5919aded5" containerName="oc" Mar 12 15:36:00 crc kubenswrapper[4832]: I0312 15:36:00.144021 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555496-vptrr" Mar 12 15:36:00 crc kubenswrapper[4832]: I0312 15:36:00.146316 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:36:00 crc kubenswrapper[4832]: I0312 15:36:00.146672 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:36:00 crc kubenswrapper[4832]: I0312 15:36:00.146675 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:36:00 crc kubenswrapper[4832]: I0312 15:36:00.151389 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555496-vptrr"] Mar 12 15:36:00 crc kubenswrapper[4832]: I0312 15:36:00.301603 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h7jc\" (UniqueName: \"kubernetes.io/projected/e231000f-90b7-4df8-bfc1-59097f15209f-kube-api-access-2h7jc\") pod \"auto-csr-approver-29555496-vptrr\" (UID: \"e231000f-90b7-4df8-bfc1-59097f15209f\") " pod="openshift-infra/auto-csr-approver-29555496-vptrr" Mar 12 15:36:00 crc kubenswrapper[4832]: I0312 15:36:00.403315 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h7jc\" (UniqueName: \"kubernetes.io/projected/e231000f-90b7-4df8-bfc1-59097f15209f-kube-api-access-2h7jc\") pod \"auto-csr-approver-29555496-vptrr\" (UID: \"e231000f-90b7-4df8-bfc1-59097f15209f\") " pod="openshift-infra/auto-csr-approver-29555496-vptrr" Mar 12 15:36:00 crc kubenswrapper[4832]: I0312 15:36:00.422297 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2h7jc\" (UniqueName: \"kubernetes.io/projected/e231000f-90b7-4df8-bfc1-59097f15209f-kube-api-access-2h7jc\") pod \"auto-csr-approver-29555496-vptrr\" (UID: \"e231000f-90b7-4df8-bfc1-59097f15209f\") " pod="openshift-infra/auto-csr-approver-29555496-vptrr" Mar 12 15:36:00 crc kubenswrapper[4832]: I0312 15:36:00.471925 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555496-vptrr" Mar 12 15:36:00 crc kubenswrapper[4832]: I0312 15:36:00.923598 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555496-vptrr"] Mar 12 15:36:01 crc kubenswrapper[4832]: I0312 15:36:01.568112 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555496-vptrr" event={"ID":"e231000f-90b7-4df8-bfc1-59097f15209f","Type":"ContainerStarted","Data":"91179d19ae4a90f2d44fb75e568bea94e32016b0b74aa6ce98bfcf76c417f562"} Mar 12 15:36:02 crc kubenswrapper[4832]: I0312 15:36:02.582371 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555496-vptrr" event={"ID":"e231000f-90b7-4df8-bfc1-59097f15209f","Type":"ContainerStarted","Data":"0b354a3da76f04fe9edf61549275c2d4a2640359062ded8842ae06bcf8ce2e14"} Mar 12 15:36:02 crc kubenswrapper[4832]: I0312 15:36:02.609305 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555496-vptrr" podStartSLOduration=1.412228 podStartE2EDuration="2.60928175s" podCreationTimestamp="2026-03-12 15:36:00 +0000 UTC" firstStartedPulling="2026-03-12 15:36:00.925323082 +0000 UTC m=+2919.569337308" lastFinishedPulling="2026-03-12 15:36:02.122376822 +0000 UTC m=+2920.766391058" observedRunningTime="2026-03-12 15:36:02.595794225 +0000 UTC m=+2921.239808461" watchObservedRunningTime="2026-03-12 15:36:02.60928175 +0000 UTC m=+2921.253295976" Mar 12 15:36:03 crc 
kubenswrapper[4832]: I0312 15:36:03.591036 4832 generic.go:334] "Generic (PLEG): container finished" podID="e231000f-90b7-4df8-bfc1-59097f15209f" containerID="0b354a3da76f04fe9edf61549275c2d4a2640359062ded8842ae06bcf8ce2e14" exitCode=0 Mar 12 15:36:03 crc kubenswrapper[4832]: I0312 15:36:03.591085 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555496-vptrr" event={"ID":"e231000f-90b7-4df8-bfc1-59097f15209f","Type":"ContainerDied","Data":"0b354a3da76f04fe9edf61549275c2d4a2640359062ded8842ae06bcf8ce2e14"} Mar 12 15:36:05 crc kubenswrapper[4832]: I0312 15:36:05.042060 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555496-vptrr" Mar 12 15:36:05 crc kubenswrapper[4832]: I0312 15:36:05.194801 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h7jc\" (UniqueName: \"kubernetes.io/projected/e231000f-90b7-4df8-bfc1-59097f15209f-kube-api-access-2h7jc\") pod \"e231000f-90b7-4df8-bfc1-59097f15209f\" (UID: \"e231000f-90b7-4df8-bfc1-59097f15209f\") " Mar 12 15:36:05 crc kubenswrapper[4832]: I0312 15:36:05.202119 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e231000f-90b7-4df8-bfc1-59097f15209f-kube-api-access-2h7jc" (OuterVolumeSpecName: "kube-api-access-2h7jc") pod "e231000f-90b7-4df8-bfc1-59097f15209f" (UID: "e231000f-90b7-4df8-bfc1-59097f15209f"). InnerVolumeSpecName "kube-api-access-2h7jc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:36:05 crc kubenswrapper[4832]: I0312 15:36:05.297688 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h7jc\" (UniqueName: \"kubernetes.io/projected/e231000f-90b7-4df8-bfc1-59097f15209f-kube-api-access-2h7jc\") on node \"crc\" DevicePath \"\"" Mar 12 15:36:05 crc kubenswrapper[4832]: I0312 15:36:05.617385 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555496-vptrr" event={"ID":"e231000f-90b7-4df8-bfc1-59097f15209f","Type":"ContainerDied","Data":"91179d19ae4a90f2d44fb75e568bea94e32016b0b74aa6ce98bfcf76c417f562"} Mar 12 15:36:05 crc kubenswrapper[4832]: I0312 15:36:05.617699 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91179d19ae4a90f2d44fb75e568bea94e32016b0b74aa6ce98bfcf76c417f562" Mar 12 15:36:05 crc kubenswrapper[4832]: I0312 15:36:05.617466 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555496-vptrr" Mar 12 15:36:05 crc kubenswrapper[4832]: I0312 15:36:05.689117 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555490-kqvl8"] Mar 12 15:36:05 crc kubenswrapper[4832]: I0312 15:36:05.696619 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555490-kqvl8"] Mar 12 15:36:06 crc kubenswrapper[4832]: I0312 15:36:06.638119 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d11ff7f-c51b-4f85-ae36-de90b4fef0e8" path="/var/lib/kubelet/pods/5d11ff7f-c51b-4f85-ae36-de90b4fef0e8/volumes" Mar 12 15:36:26 crc kubenswrapper[4832]: I0312 15:36:26.314091 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 12 15:36:26 crc kubenswrapper[4832]: I0312 15:36:26.314716 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:36:52 crc kubenswrapper[4832]: I0312 15:36:52.149280 4832 scope.go:117] "RemoveContainer" containerID="df0ca1d8216c15d08ae44c1abc5414e6e28c9bcbc347057d2e674a9c7e596a65" Mar 12 15:36:56 crc kubenswrapper[4832]: I0312 15:36:56.314772 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:36:56 crc kubenswrapper[4832]: I0312 15:36:56.315324 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:36:56 crc kubenswrapper[4832]: I0312 15:36:56.315384 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 15:36:56 crc kubenswrapper[4832]: I0312 15:36:56.316211 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05"} pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 
12 15:36:56 crc kubenswrapper[4832]: I0312 15:36:56.316283 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" containerID="cri-o://91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05" gracePeriod=600 Mar 12 15:36:56 crc kubenswrapper[4832]: E0312 15:36:56.445401 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:36:57 crc kubenswrapper[4832]: I0312 15:36:57.132121 4832 generic.go:334] "Generic (PLEG): container finished" podID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05" exitCode=0 Mar 12 15:36:57 crc kubenswrapper[4832]: I0312 15:36:57.132191 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerDied","Data":"91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05"} Mar 12 15:36:57 crc kubenswrapper[4832]: I0312 15:36:57.132245 4832 scope.go:117] "RemoveContainer" containerID="b8dff5b7b35c730d242efb74081df9ddc7097bb83cf1387ad12def4b71de8e8a" Mar 12 15:36:57 crc kubenswrapper[4832]: I0312 15:36:57.132964 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05" Mar 12 15:36:57 crc kubenswrapper[4832]: E0312 15:36:57.133310 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:37:08 crc kubenswrapper[4832]: I0312 15:37:08.515985 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t5hlp"] Mar 12 15:37:08 crc kubenswrapper[4832]: E0312 15:37:08.519145 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e231000f-90b7-4df8-bfc1-59097f15209f" containerName="oc" Mar 12 15:37:08 crc kubenswrapper[4832]: I0312 15:37:08.519170 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e231000f-90b7-4df8-bfc1-59097f15209f" containerName="oc" Mar 12 15:37:08 crc kubenswrapper[4832]: I0312 15:37:08.519358 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e231000f-90b7-4df8-bfc1-59097f15209f" containerName="oc" Mar 12 15:37:08 crc kubenswrapper[4832]: I0312 15:37:08.520638 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5hlp" Mar 12 15:37:08 crc kubenswrapper[4832]: I0312 15:37:08.537910 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5hlp"] Mar 12 15:37:08 crc kubenswrapper[4832]: I0312 15:37:08.545077 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/579f64f1-d6a0-49a4-9359-5f4eb78ee629-catalog-content\") pod \"redhat-marketplace-t5hlp\" (UID: \"579f64f1-d6a0-49a4-9359-5f4eb78ee629\") " pod="openshift-marketplace/redhat-marketplace-t5hlp" Mar 12 15:37:08 crc kubenswrapper[4832]: I0312 15:37:08.545355 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gz7h\" (UniqueName: \"kubernetes.io/projected/579f64f1-d6a0-49a4-9359-5f4eb78ee629-kube-api-access-6gz7h\") pod \"redhat-marketplace-t5hlp\" (UID: \"579f64f1-d6a0-49a4-9359-5f4eb78ee629\") " pod="openshift-marketplace/redhat-marketplace-t5hlp" Mar 12 15:37:08 crc kubenswrapper[4832]: I0312 15:37:08.545472 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/579f64f1-d6a0-49a4-9359-5f4eb78ee629-utilities\") pod \"redhat-marketplace-t5hlp\" (UID: \"579f64f1-d6a0-49a4-9359-5f4eb78ee629\") " pod="openshift-marketplace/redhat-marketplace-t5hlp" Mar 12 15:37:08 crc kubenswrapper[4832]: I0312 15:37:08.647713 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/579f64f1-d6a0-49a4-9359-5f4eb78ee629-catalog-content\") pod \"redhat-marketplace-t5hlp\" (UID: \"579f64f1-d6a0-49a4-9359-5f4eb78ee629\") " pod="openshift-marketplace/redhat-marketplace-t5hlp" Mar 12 15:37:08 crc kubenswrapper[4832]: I0312 15:37:08.648188 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/579f64f1-d6a0-49a4-9359-5f4eb78ee629-catalog-content\") pod \"redhat-marketplace-t5hlp\" (UID: \"579f64f1-d6a0-49a4-9359-5f4eb78ee629\") " pod="openshift-marketplace/redhat-marketplace-t5hlp" Mar 12 15:37:08 crc kubenswrapper[4832]: I0312 15:37:08.649255 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gz7h\" (UniqueName: \"kubernetes.io/projected/579f64f1-d6a0-49a4-9359-5f4eb78ee629-kube-api-access-6gz7h\") pod \"redhat-marketplace-t5hlp\" (UID: \"579f64f1-d6a0-49a4-9359-5f4eb78ee629\") " pod="openshift-marketplace/redhat-marketplace-t5hlp" Mar 12 15:37:08 crc kubenswrapper[4832]: I0312 15:37:08.649335 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/579f64f1-d6a0-49a4-9359-5f4eb78ee629-utilities\") pod \"redhat-marketplace-t5hlp\" (UID: \"579f64f1-d6a0-49a4-9359-5f4eb78ee629\") " pod="openshift-marketplace/redhat-marketplace-t5hlp" Mar 12 15:37:08 crc kubenswrapper[4832]: I0312 15:37:08.649837 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/579f64f1-d6a0-49a4-9359-5f4eb78ee629-utilities\") pod \"redhat-marketplace-t5hlp\" (UID: \"579f64f1-d6a0-49a4-9359-5f4eb78ee629\") " pod="openshift-marketplace/redhat-marketplace-t5hlp" Mar 12 15:37:08 crc kubenswrapper[4832]: I0312 15:37:08.668742 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gz7h\" (UniqueName: \"kubernetes.io/projected/579f64f1-d6a0-49a4-9359-5f4eb78ee629-kube-api-access-6gz7h\") pod \"redhat-marketplace-t5hlp\" (UID: \"579f64f1-d6a0-49a4-9359-5f4eb78ee629\") " pod="openshift-marketplace/redhat-marketplace-t5hlp" Mar 12 15:37:08 crc kubenswrapper[4832]: I0312 15:37:08.855594 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5hlp" Mar 12 15:37:09 crc kubenswrapper[4832]: I0312 15:37:09.343874 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5hlp"] Mar 12 15:37:10 crc kubenswrapper[4832]: I0312 15:37:10.272861 4832 generic.go:334] "Generic (PLEG): container finished" podID="579f64f1-d6a0-49a4-9359-5f4eb78ee629" containerID="7aa7fb5b238876a4eb9bf82a2b17af61ddcbc63061ab35618390339573540fc8" exitCode=0 Mar 12 15:37:10 crc kubenswrapper[4832]: I0312 15:37:10.272930 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5hlp" event={"ID":"579f64f1-d6a0-49a4-9359-5f4eb78ee629","Type":"ContainerDied","Data":"7aa7fb5b238876a4eb9bf82a2b17af61ddcbc63061ab35618390339573540fc8"} Mar 12 15:37:10 crc kubenswrapper[4832]: I0312 15:37:10.272988 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5hlp" event={"ID":"579f64f1-d6a0-49a4-9359-5f4eb78ee629","Type":"ContainerStarted","Data":"efcbf4ff25841ae21927907143b13da39ddd25f806a24b516a18db06f131416f"} Mar 12 15:37:10 crc kubenswrapper[4832]: I0312 15:37:10.274851 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 15:37:10 crc kubenswrapper[4832]: I0312 15:37:10.620662 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05" Mar 12 15:37:10 crc kubenswrapper[4832]: E0312 15:37:10.621458 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 
15:37:11 crc kubenswrapper[4832]: I0312 15:37:11.283893 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5hlp" event={"ID":"579f64f1-d6a0-49a4-9359-5f4eb78ee629","Type":"ContainerStarted","Data":"77aaa9583c503332119cad4cf8ebfffe2428a35eaa5619cabea5ebfab10ec101"} Mar 12 15:37:12 crc kubenswrapper[4832]: I0312 15:37:12.297152 4832 generic.go:334] "Generic (PLEG): container finished" podID="579f64f1-d6a0-49a4-9359-5f4eb78ee629" containerID="77aaa9583c503332119cad4cf8ebfffe2428a35eaa5619cabea5ebfab10ec101" exitCode=0 Mar 12 15:37:12 crc kubenswrapper[4832]: I0312 15:37:12.297302 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5hlp" event={"ID":"579f64f1-d6a0-49a4-9359-5f4eb78ee629","Type":"ContainerDied","Data":"77aaa9583c503332119cad4cf8ebfffe2428a35eaa5619cabea5ebfab10ec101"} Mar 12 15:37:13 crc kubenswrapper[4832]: I0312 15:37:13.312423 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5hlp" event={"ID":"579f64f1-d6a0-49a4-9359-5f4eb78ee629","Type":"ContainerStarted","Data":"f07216c5d271b95c6846046839585728840d3d9c693294ebbea36832f39d987a"} Mar 12 15:37:13 crc kubenswrapper[4832]: I0312 15:37:13.352735 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t5hlp" podStartSLOduration=2.829974469 podStartE2EDuration="5.35271026s" podCreationTimestamp="2026-03-12 15:37:08 +0000 UTC" firstStartedPulling="2026-03-12 15:37:10.274465632 +0000 UTC m=+2988.918479868" lastFinishedPulling="2026-03-12 15:37:12.797201433 +0000 UTC m=+2991.441215659" observedRunningTime="2026-03-12 15:37:13.343663992 +0000 UTC m=+2991.987678228" watchObservedRunningTime="2026-03-12 15:37:13.35271026 +0000 UTC m=+2991.996724496" Mar 12 15:37:18 crc kubenswrapper[4832]: I0312 15:37:18.856133 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-t5hlp" Mar 12 15:37:18 crc kubenswrapper[4832]: I0312 15:37:18.857791 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t5hlp" Mar 12 15:37:18 crc kubenswrapper[4832]: I0312 15:37:18.922490 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t5hlp" Mar 12 15:37:19 crc kubenswrapper[4832]: I0312 15:37:19.428009 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t5hlp" Mar 12 15:37:19 crc kubenswrapper[4832]: I0312 15:37:19.477542 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5hlp"] Mar 12 15:37:21 crc kubenswrapper[4832]: I0312 15:37:21.388076 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t5hlp" podUID="579f64f1-d6a0-49a4-9359-5f4eb78ee629" containerName="registry-server" containerID="cri-o://f07216c5d271b95c6846046839585728840d3d9c693294ebbea36832f39d987a" gracePeriod=2 Mar 12 15:37:21 crc kubenswrapper[4832]: I0312 15:37:21.963396 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5hlp" Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.035647 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/579f64f1-d6a0-49a4-9359-5f4eb78ee629-utilities\") pod \"579f64f1-d6a0-49a4-9359-5f4eb78ee629\" (UID: \"579f64f1-d6a0-49a4-9359-5f4eb78ee629\") " Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.035758 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/579f64f1-d6a0-49a4-9359-5f4eb78ee629-catalog-content\") pod \"579f64f1-d6a0-49a4-9359-5f4eb78ee629\" (UID: \"579f64f1-d6a0-49a4-9359-5f4eb78ee629\") " Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.035887 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gz7h\" (UniqueName: \"kubernetes.io/projected/579f64f1-d6a0-49a4-9359-5f4eb78ee629-kube-api-access-6gz7h\") pod \"579f64f1-d6a0-49a4-9359-5f4eb78ee629\" (UID: \"579f64f1-d6a0-49a4-9359-5f4eb78ee629\") " Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.039034 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/579f64f1-d6a0-49a4-9359-5f4eb78ee629-utilities" (OuterVolumeSpecName: "utilities") pod "579f64f1-d6a0-49a4-9359-5f4eb78ee629" (UID: "579f64f1-d6a0-49a4-9359-5f4eb78ee629"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.048411 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/579f64f1-d6a0-49a4-9359-5f4eb78ee629-kube-api-access-6gz7h" (OuterVolumeSpecName: "kube-api-access-6gz7h") pod "579f64f1-d6a0-49a4-9359-5f4eb78ee629" (UID: "579f64f1-d6a0-49a4-9359-5f4eb78ee629"). InnerVolumeSpecName "kube-api-access-6gz7h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.063896 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/579f64f1-d6a0-49a4-9359-5f4eb78ee629-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "579f64f1-d6a0-49a4-9359-5f4eb78ee629" (UID: "579f64f1-d6a0-49a4-9359-5f4eb78ee629"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.137354 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gz7h\" (UniqueName: \"kubernetes.io/projected/579f64f1-d6a0-49a4-9359-5f4eb78ee629-kube-api-access-6gz7h\") on node \"crc\" DevicePath \"\"" Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.137721 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/579f64f1-d6a0-49a4-9359-5f4eb78ee629-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.137736 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/579f64f1-d6a0-49a4-9359-5f4eb78ee629-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.402919 4832 generic.go:334] "Generic (PLEG): container finished" podID="579f64f1-d6a0-49a4-9359-5f4eb78ee629" containerID="f07216c5d271b95c6846046839585728840d3d9c693294ebbea36832f39d987a" exitCode=0 Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.402978 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5hlp" event={"ID":"579f64f1-d6a0-49a4-9359-5f4eb78ee629","Type":"ContainerDied","Data":"f07216c5d271b95c6846046839585728840d3d9c693294ebbea36832f39d987a"} Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.403019 4832 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-t5hlp" event={"ID":"579f64f1-d6a0-49a4-9359-5f4eb78ee629","Type":"ContainerDied","Data":"efcbf4ff25841ae21927907143b13da39ddd25f806a24b516a18db06f131416f"} Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.403051 4832 scope.go:117] "RemoveContainer" containerID="f07216c5d271b95c6846046839585728840d3d9c693294ebbea36832f39d987a" Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.403257 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5hlp" Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.440111 4832 scope.go:117] "RemoveContainer" containerID="77aaa9583c503332119cad4cf8ebfffe2428a35eaa5619cabea5ebfab10ec101" Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.466146 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5hlp"] Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.478104 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5hlp"] Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.491421 4832 scope.go:117] "RemoveContainer" containerID="7aa7fb5b238876a4eb9bf82a2b17af61ddcbc63061ab35618390339573540fc8" Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.517288 4832 scope.go:117] "RemoveContainer" containerID="f07216c5d271b95c6846046839585728840d3d9c693294ebbea36832f39d987a" Mar 12 15:37:22 crc kubenswrapper[4832]: E0312 15:37:22.517920 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f07216c5d271b95c6846046839585728840d3d9c693294ebbea36832f39d987a\": container with ID starting with f07216c5d271b95c6846046839585728840d3d9c693294ebbea36832f39d987a not found: ID does not exist" containerID="f07216c5d271b95c6846046839585728840d3d9c693294ebbea36832f39d987a" Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.517968 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f07216c5d271b95c6846046839585728840d3d9c693294ebbea36832f39d987a"} err="failed to get container status \"f07216c5d271b95c6846046839585728840d3d9c693294ebbea36832f39d987a\": rpc error: code = NotFound desc = could not find container \"f07216c5d271b95c6846046839585728840d3d9c693294ebbea36832f39d987a\": container with ID starting with f07216c5d271b95c6846046839585728840d3d9c693294ebbea36832f39d987a not found: ID does not exist" Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.517997 4832 scope.go:117] "RemoveContainer" containerID="77aaa9583c503332119cad4cf8ebfffe2428a35eaa5619cabea5ebfab10ec101" Mar 12 15:37:22 crc kubenswrapper[4832]: E0312 15:37:22.518351 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77aaa9583c503332119cad4cf8ebfffe2428a35eaa5619cabea5ebfab10ec101\": container with ID starting with 77aaa9583c503332119cad4cf8ebfffe2428a35eaa5619cabea5ebfab10ec101 not found: ID does not exist" containerID="77aaa9583c503332119cad4cf8ebfffe2428a35eaa5619cabea5ebfab10ec101" Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.518385 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77aaa9583c503332119cad4cf8ebfffe2428a35eaa5619cabea5ebfab10ec101"} err="failed to get container status \"77aaa9583c503332119cad4cf8ebfffe2428a35eaa5619cabea5ebfab10ec101\": rpc error: code = NotFound desc = could not find container \"77aaa9583c503332119cad4cf8ebfffe2428a35eaa5619cabea5ebfab10ec101\": container with ID starting with 77aaa9583c503332119cad4cf8ebfffe2428a35eaa5619cabea5ebfab10ec101 not found: ID does not exist" Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.518414 4832 scope.go:117] "RemoveContainer" containerID="7aa7fb5b238876a4eb9bf82a2b17af61ddcbc63061ab35618390339573540fc8" Mar 12 15:37:22 crc kubenswrapper[4832]: E0312 
15:37:22.518733 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aa7fb5b238876a4eb9bf82a2b17af61ddcbc63061ab35618390339573540fc8\": container with ID starting with 7aa7fb5b238876a4eb9bf82a2b17af61ddcbc63061ab35618390339573540fc8 not found: ID does not exist" containerID="7aa7fb5b238876a4eb9bf82a2b17af61ddcbc63061ab35618390339573540fc8" Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.518757 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aa7fb5b238876a4eb9bf82a2b17af61ddcbc63061ab35618390339573540fc8"} err="failed to get container status \"7aa7fb5b238876a4eb9bf82a2b17af61ddcbc63061ab35618390339573540fc8\": rpc error: code = NotFound desc = could not find container \"7aa7fb5b238876a4eb9bf82a2b17af61ddcbc63061ab35618390339573540fc8\": container with ID starting with 7aa7fb5b238876a4eb9bf82a2b17af61ddcbc63061ab35618390339573540fc8 not found: ID does not exist" Mar 12 15:37:22 crc kubenswrapper[4832]: I0312 15:37:22.631940 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="579f64f1-d6a0-49a4-9359-5f4eb78ee629" path="/var/lib/kubelet/pods/579f64f1-d6a0-49a4-9359-5f4eb78ee629/volumes" Mar 12 15:37:25 crc kubenswrapper[4832]: I0312 15:37:25.620663 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05" Mar 12 15:37:25 crc kubenswrapper[4832]: E0312 15:37:25.621457 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:37:40 crc kubenswrapper[4832]: I0312 15:37:40.620140 
4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05" Mar 12 15:37:40 crc kubenswrapper[4832]: E0312 15:37:40.621061 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:37:52 crc kubenswrapper[4832]: I0312 15:37:52.627952 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05" Mar 12 15:37:52 crc kubenswrapper[4832]: E0312 15:37:52.628855 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:38:00 crc kubenswrapper[4832]: I0312 15:38:00.140339 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555498-z2ptk"] Mar 12 15:38:00 crc kubenswrapper[4832]: E0312 15:38:00.141194 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="579f64f1-d6a0-49a4-9359-5f4eb78ee629" containerName="registry-server" Mar 12 15:38:00 crc kubenswrapper[4832]: I0312 15:38:00.141205 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="579f64f1-d6a0-49a4-9359-5f4eb78ee629" containerName="registry-server" Mar 12 15:38:00 crc kubenswrapper[4832]: E0312 15:38:00.141219 4832 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="579f64f1-d6a0-49a4-9359-5f4eb78ee629" containerName="extract-utilities" Mar 12 15:38:00 crc kubenswrapper[4832]: I0312 15:38:00.141224 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="579f64f1-d6a0-49a4-9359-5f4eb78ee629" containerName="extract-utilities" Mar 12 15:38:00 crc kubenswrapper[4832]: E0312 15:38:00.141254 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="579f64f1-d6a0-49a4-9359-5f4eb78ee629" containerName="extract-content" Mar 12 15:38:00 crc kubenswrapper[4832]: I0312 15:38:00.141260 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="579f64f1-d6a0-49a4-9359-5f4eb78ee629" containerName="extract-content" Mar 12 15:38:00 crc kubenswrapper[4832]: I0312 15:38:00.141437 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="579f64f1-d6a0-49a4-9359-5f4eb78ee629" containerName="registry-server" Mar 12 15:38:00 crc kubenswrapper[4832]: I0312 15:38:00.142165 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555498-z2ptk" Mar 12 15:38:00 crc kubenswrapper[4832]: I0312 15:38:00.144202 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:38:00 crc kubenswrapper[4832]: I0312 15:38:00.144271 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:38:00 crc kubenswrapper[4832]: I0312 15:38:00.144986 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:38:00 crc kubenswrapper[4832]: I0312 15:38:00.148960 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555498-z2ptk"] Mar 12 15:38:00 crc kubenswrapper[4832]: I0312 15:38:00.297831 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc6sn\" (UniqueName: 
\"kubernetes.io/projected/559b4600-c134-4a7a-ad81-853daec70098-kube-api-access-zc6sn\") pod \"auto-csr-approver-29555498-z2ptk\" (UID: \"559b4600-c134-4a7a-ad81-853daec70098\") " pod="openshift-infra/auto-csr-approver-29555498-z2ptk" Mar 12 15:38:00 crc kubenswrapper[4832]: I0312 15:38:00.399816 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc6sn\" (UniqueName: \"kubernetes.io/projected/559b4600-c134-4a7a-ad81-853daec70098-kube-api-access-zc6sn\") pod \"auto-csr-approver-29555498-z2ptk\" (UID: \"559b4600-c134-4a7a-ad81-853daec70098\") " pod="openshift-infra/auto-csr-approver-29555498-z2ptk" Mar 12 15:38:00 crc kubenswrapper[4832]: I0312 15:38:00.419556 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc6sn\" (UniqueName: \"kubernetes.io/projected/559b4600-c134-4a7a-ad81-853daec70098-kube-api-access-zc6sn\") pod \"auto-csr-approver-29555498-z2ptk\" (UID: \"559b4600-c134-4a7a-ad81-853daec70098\") " pod="openshift-infra/auto-csr-approver-29555498-z2ptk" Mar 12 15:38:00 crc kubenswrapper[4832]: I0312 15:38:00.462845 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555498-z2ptk" Mar 12 15:38:00 crc kubenswrapper[4832]: W0312 15:38:00.974280 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod559b4600_c134_4a7a_ad81_853daec70098.slice/crio-a8c74156c3fc25a73e741363d097b17231c70b584ce91e7b1f7e1da1ad163841 WatchSource:0}: Error finding container a8c74156c3fc25a73e741363d097b17231c70b584ce91e7b1f7e1da1ad163841: Status 404 returned error can't find the container with id a8c74156c3fc25a73e741363d097b17231c70b584ce91e7b1f7e1da1ad163841 Mar 12 15:38:00 crc kubenswrapper[4832]: I0312 15:38:00.979566 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555498-z2ptk"] Mar 12 15:38:01 crc kubenswrapper[4832]: I0312 15:38:01.818066 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555498-z2ptk" event={"ID":"559b4600-c134-4a7a-ad81-853daec70098","Type":"ContainerStarted","Data":"a8c74156c3fc25a73e741363d097b17231c70b584ce91e7b1f7e1da1ad163841"} Mar 12 15:38:02 crc kubenswrapper[4832]: I0312 15:38:02.833019 4832 generic.go:334] "Generic (PLEG): container finished" podID="559b4600-c134-4a7a-ad81-853daec70098" containerID="495ec2dfe1f876146886813b7a40cad2483a8df79f062d62feedf1efdc975bf7" exitCode=0 Mar 12 15:38:02 crc kubenswrapper[4832]: I0312 15:38:02.833263 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555498-z2ptk" event={"ID":"559b4600-c134-4a7a-ad81-853daec70098","Type":"ContainerDied","Data":"495ec2dfe1f876146886813b7a40cad2483a8df79f062d62feedf1efdc975bf7"} Mar 12 15:38:04 crc kubenswrapper[4832]: I0312 15:38:04.282810 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555498-z2ptk" Mar 12 15:38:04 crc kubenswrapper[4832]: I0312 15:38:04.380252 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc6sn\" (UniqueName: \"kubernetes.io/projected/559b4600-c134-4a7a-ad81-853daec70098-kube-api-access-zc6sn\") pod \"559b4600-c134-4a7a-ad81-853daec70098\" (UID: \"559b4600-c134-4a7a-ad81-853daec70098\") " Mar 12 15:38:04 crc kubenswrapper[4832]: I0312 15:38:04.385362 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/559b4600-c134-4a7a-ad81-853daec70098-kube-api-access-zc6sn" (OuterVolumeSpecName: "kube-api-access-zc6sn") pod "559b4600-c134-4a7a-ad81-853daec70098" (UID: "559b4600-c134-4a7a-ad81-853daec70098"). InnerVolumeSpecName "kube-api-access-zc6sn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:38:04 crc kubenswrapper[4832]: I0312 15:38:04.482673 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc6sn\" (UniqueName: \"kubernetes.io/projected/559b4600-c134-4a7a-ad81-853daec70098-kube-api-access-zc6sn\") on node \"crc\" DevicePath \"\"" Mar 12 15:38:04 crc kubenswrapper[4832]: I0312 15:38:04.856878 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555498-z2ptk" event={"ID":"559b4600-c134-4a7a-ad81-853daec70098","Type":"ContainerDied","Data":"a8c74156c3fc25a73e741363d097b17231c70b584ce91e7b1f7e1da1ad163841"} Mar 12 15:38:04 crc kubenswrapper[4832]: I0312 15:38:04.856936 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8c74156c3fc25a73e741363d097b17231c70b584ce91e7b1f7e1da1ad163841" Mar 12 15:38:04 crc kubenswrapper[4832]: I0312 15:38:04.857005 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555498-z2ptk" Mar 12 15:38:05 crc kubenswrapper[4832]: I0312 15:38:05.362201 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555492-tt6hh"] Mar 12 15:38:05 crc kubenswrapper[4832]: I0312 15:38:05.374757 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555492-tt6hh"] Mar 12 15:38:06 crc kubenswrapper[4832]: I0312 15:38:06.642041 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="293116c3-d31a-499c-99ba-37e61728f952" path="/var/lib/kubelet/pods/293116c3-d31a-499c-99ba-37e61728f952/volumes" Mar 12 15:38:07 crc kubenswrapper[4832]: I0312 15:38:07.620154 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05" Mar 12 15:38:07 crc kubenswrapper[4832]: E0312 15:38:07.620494 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:38:20 crc kubenswrapper[4832]: I0312 15:38:20.620415 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05" Mar 12 15:38:20 crc kubenswrapper[4832]: E0312 15:38:20.621172 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" 
podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:38:32 crc kubenswrapper[4832]: I0312 15:38:32.634442 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05" Mar 12 15:38:32 crc kubenswrapper[4832]: E0312 15:38:32.635323 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:38:45 crc kubenswrapper[4832]: I0312 15:38:45.620568 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05" Mar 12 15:38:45 crc kubenswrapper[4832]: E0312 15:38:45.621717 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:38:52 crc kubenswrapper[4832]: I0312 15:38:52.285217 4832 scope.go:117] "RemoveContainer" containerID="c9e6b9daf115a114143f64013427cbfcfc4c908c7d96215f2616ed9b50dbc8a1" Mar 12 15:38:56 crc kubenswrapper[4832]: I0312 15:38:56.619705 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05" Mar 12 15:38:56 crc kubenswrapper[4832]: E0312 15:38:56.620483 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:39:07 crc kubenswrapper[4832]: I0312 15:39:07.619402 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05" Mar 12 15:39:07 crc kubenswrapper[4832]: E0312 15:39:07.620416 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:39:21 crc kubenswrapper[4832]: I0312 15:39:21.619798 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05" Mar 12 15:39:21 crc kubenswrapper[4832]: E0312 15:39:21.620653 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:39:36 crc kubenswrapper[4832]: I0312 15:39:36.620128 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05" Mar 12 15:39:36 crc kubenswrapper[4832]: E0312 15:39:36.620938 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:39:51 crc kubenswrapper[4832]: I0312 15:39:51.619698 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05" Mar 12 15:39:51 crc kubenswrapper[4832]: E0312 15:39:51.620436 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:40:00 crc kubenswrapper[4832]: I0312 15:40:00.155105 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555500-9xmr5"] Mar 12 15:40:00 crc kubenswrapper[4832]: E0312 15:40:00.156122 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="559b4600-c134-4a7a-ad81-853daec70098" containerName="oc" Mar 12 15:40:00 crc kubenswrapper[4832]: I0312 15:40:00.156138 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="559b4600-c134-4a7a-ad81-853daec70098" containerName="oc" Mar 12 15:40:00 crc kubenswrapper[4832]: I0312 15:40:00.156393 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="559b4600-c134-4a7a-ad81-853daec70098" containerName="oc" Mar 12 15:40:00 crc kubenswrapper[4832]: I0312 15:40:00.157125 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555500-9xmr5"
Mar 12 15:40:00 crc kubenswrapper[4832]: I0312 15:40:00.159626 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9"
Mar 12 15:40:00 crc kubenswrapper[4832]: I0312 15:40:00.160269 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 12 15:40:00 crc kubenswrapper[4832]: I0312 15:40:00.160749 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 12 15:40:00 crc kubenswrapper[4832]: I0312 15:40:00.178598 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555500-9xmr5"]
Mar 12 15:40:00 crc kubenswrapper[4832]: I0312 15:40:00.234210 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc4dp\" (UniqueName: \"kubernetes.io/projected/4ebf363f-19b6-4ef3-bab3-231d3929f7fb-kube-api-access-zc4dp\") pod \"auto-csr-approver-29555500-9xmr5\" (UID: \"4ebf363f-19b6-4ef3-bab3-231d3929f7fb\") " pod="openshift-infra/auto-csr-approver-29555500-9xmr5"
Mar 12 15:40:00 crc kubenswrapper[4832]: I0312 15:40:00.335882 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc4dp\" (UniqueName: \"kubernetes.io/projected/4ebf363f-19b6-4ef3-bab3-231d3929f7fb-kube-api-access-zc4dp\") pod \"auto-csr-approver-29555500-9xmr5\" (UID: \"4ebf363f-19b6-4ef3-bab3-231d3929f7fb\") " pod="openshift-infra/auto-csr-approver-29555500-9xmr5"
Mar 12 15:40:00 crc kubenswrapper[4832]: I0312 15:40:00.362832 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc4dp\" (UniqueName: \"kubernetes.io/projected/4ebf363f-19b6-4ef3-bab3-231d3929f7fb-kube-api-access-zc4dp\") pod \"auto-csr-approver-29555500-9xmr5\" (UID: \"4ebf363f-19b6-4ef3-bab3-231d3929f7fb\") " pod="openshift-infra/auto-csr-approver-29555500-9xmr5"
Mar 12 15:40:00 crc kubenswrapper[4832]: I0312 15:40:00.479546 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555500-9xmr5"
Mar 12 15:40:00 crc kubenswrapper[4832]: I0312 15:40:00.961368 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555500-9xmr5"]
Mar 12 15:40:01 crc kubenswrapper[4832]: I0312 15:40:01.132687 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555500-9xmr5" event={"ID":"4ebf363f-19b6-4ef3-bab3-231d3929f7fb","Type":"ContainerStarted","Data":"9b00adeaab2a39d1d97a278672a775fd7ef418c03560d24ecdc618bc82f98722"}
Mar 12 15:40:03 crc kubenswrapper[4832]: I0312 15:40:03.152573 4832 generic.go:334] "Generic (PLEG): container finished" podID="4ebf363f-19b6-4ef3-bab3-231d3929f7fb" containerID="71edb25c5574fe9d493fa40c1f8a096bf902c1638177a8c22c8ad66bda2f8907" exitCode=0
Mar 12 15:40:03 crc kubenswrapper[4832]: I0312 15:40:03.152633 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555500-9xmr5" event={"ID":"4ebf363f-19b6-4ef3-bab3-231d3929f7fb","Type":"ContainerDied","Data":"71edb25c5574fe9d493fa40c1f8a096bf902c1638177a8c22c8ad66bda2f8907"}
Mar 12 15:40:03 crc kubenswrapper[4832]: I0312 15:40:03.619993 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05"
Mar 12 15:40:03 crc kubenswrapper[4832]: E0312 15:40:03.620490 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a"
Mar 12 15:40:04 crc kubenswrapper[4832]: I0312 15:40:04.493144 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555500-9xmr5"
Mar 12 15:40:04 crc kubenswrapper[4832]: I0312 15:40:04.662763 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc4dp\" (UniqueName: \"kubernetes.io/projected/4ebf363f-19b6-4ef3-bab3-231d3929f7fb-kube-api-access-zc4dp\") pod \"4ebf363f-19b6-4ef3-bab3-231d3929f7fb\" (UID: \"4ebf363f-19b6-4ef3-bab3-231d3929f7fb\") "
Mar 12 15:40:04 crc kubenswrapper[4832]: I0312 15:40:04.669525 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ebf363f-19b6-4ef3-bab3-231d3929f7fb-kube-api-access-zc4dp" (OuterVolumeSpecName: "kube-api-access-zc4dp") pod "4ebf363f-19b6-4ef3-bab3-231d3929f7fb" (UID: "4ebf363f-19b6-4ef3-bab3-231d3929f7fb"). InnerVolumeSpecName "kube-api-access-zc4dp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:40:04 crc kubenswrapper[4832]: I0312 15:40:04.765414 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc4dp\" (UniqueName: \"kubernetes.io/projected/4ebf363f-19b6-4ef3-bab3-231d3929f7fb-kube-api-access-zc4dp\") on node \"crc\" DevicePath \"\""
Mar 12 15:40:05 crc kubenswrapper[4832]: I0312 15:40:05.175787 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555500-9xmr5" event={"ID":"4ebf363f-19b6-4ef3-bab3-231d3929f7fb","Type":"ContainerDied","Data":"9b00adeaab2a39d1d97a278672a775fd7ef418c03560d24ecdc618bc82f98722"}
Mar 12 15:40:05 crc kubenswrapper[4832]: I0312 15:40:05.175847 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b00adeaab2a39d1d97a278672a775fd7ef418c03560d24ecdc618bc82f98722"
Mar 12 15:40:05 crc kubenswrapper[4832]: I0312 15:40:05.175894 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555500-9xmr5"
Mar 12 15:40:05 crc kubenswrapper[4832]: I0312 15:40:05.576195 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555494-w59zw"]
Mar 12 15:40:05 crc kubenswrapper[4832]: I0312 15:40:05.583715 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555494-w59zw"]
Mar 12 15:40:06 crc kubenswrapper[4832]: I0312 15:40:06.642734 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa30eaf4-ab42-43ab-906d-0bc5919aded5" path="/var/lib/kubelet/pods/fa30eaf4-ab42-43ab-906d-0bc5919aded5/volumes"
Mar 12 15:40:17 crc kubenswrapper[4832]: I0312 15:40:17.619905 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05"
Mar 12 15:40:17 crc kubenswrapper[4832]: E0312 15:40:17.620518 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a"
Mar 12 15:40:30 crc kubenswrapper[4832]: I0312 15:40:30.619836 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05"
Mar 12 15:40:30 crc kubenswrapper[4832]: E0312 15:40:30.620767 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a"
Mar 12 15:40:43 crc kubenswrapper[4832]: I0312 15:40:43.619704 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05"
Mar 12 15:40:43 crc kubenswrapper[4832]: E0312 15:40:43.620304 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a"
Mar 12 15:40:52 crc kubenswrapper[4832]: I0312 15:40:52.393175 4832 scope.go:117] "RemoveContainer" containerID="0a4d7631bbcd9beb1c65d7d4e3f5645b0a4916448311991c445b9ba0a78a4188"
Mar 12 15:40:56 crc kubenswrapper[4832]: I0312 15:40:56.621406 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05"
Mar 12 15:40:56 crc kubenswrapper[4832]: E0312 15:40:56.622994 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a"
Mar 12 15:41:07 crc kubenswrapper[4832]: I0312 15:41:07.620348 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05"
Mar 12 15:41:07 crc kubenswrapper[4832]: E0312 15:41:07.621251 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a"
Mar 12 15:41:12 crc kubenswrapper[4832]: I0312 15:41:12.370708 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wc6h2"]
Mar 12 15:41:12 crc kubenswrapper[4832]: E0312 15:41:12.372438 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebf363f-19b6-4ef3-bab3-231d3929f7fb" containerName="oc"
Mar 12 15:41:12 crc kubenswrapper[4832]: I0312 15:41:12.372475 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebf363f-19b6-4ef3-bab3-231d3929f7fb" containerName="oc"
Mar 12 15:41:12 crc kubenswrapper[4832]: I0312 15:41:12.373088 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ebf363f-19b6-4ef3-bab3-231d3929f7fb" containerName="oc"
Mar 12 15:41:12 crc kubenswrapper[4832]: I0312 15:41:12.376430 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wc6h2"
Mar 12 15:41:12 crc kubenswrapper[4832]: I0312 15:41:12.389879 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wc6h2"]
Mar 12 15:41:12 crc kubenswrapper[4832]: I0312 15:41:12.405118 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e-utilities\") pod \"community-operators-wc6h2\" (UID: \"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e\") " pod="openshift-marketplace/community-operators-wc6h2"
Mar 12 15:41:12 crc kubenswrapper[4832]: I0312 15:41:12.405214 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds68m\" (UniqueName: \"kubernetes.io/projected/65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e-kube-api-access-ds68m\") pod \"community-operators-wc6h2\" (UID: \"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e\") " pod="openshift-marketplace/community-operators-wc6h2"
Mar 12 15:41:12 crc kubenswrapper[4832]: I0312 15:41:12.405321 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e-catalog-content\") pod \"community-operators-wc6h2\" (UID: \"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e\") " pod="openshift-marketplace/community-operators-wc6h2"
Mar 12 15:41:12 crc kubenswrapper[4832]: I0312 15:41:12.506865 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e-catalog-content\") pod \"community-operators-wc6h2\" (UID: \"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e\") " pod="openshift-marketplace/community-operators-wc6h2"
Mar 12 15:41:12 crc kubenswrapper[4832]: I0312 15:41:12.507337 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e-utilities\") pod \"community-operators-wc6h2\" (UID: \"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e\") " pod="openshift-marketplace/community-operators-wc6h2"
Mar 12 15:41:12 crc kubenswrapper[4832]: I0312 15:41:12.507423 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds68m\" (UniqueName: \"kubernetes.io/projected/65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e-kube-api-access-ds68m\") pod \"community-operators-wc6h2\" (UID: \"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e\") " pod="openshift-marketplace/community-operators-wc6h2"
Mar 12 15:41:12 crc kubenswrapper[4832]: I0312 15:41:12.508478 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e-catalog-content\") pod \"community-operators-wc6h2\" (UID: \"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e\") " pod="openshift-marketplace/community-operators-wc6h2"
Mar 12 15:41:12 crc kubenswrapper[4832]: I0312 15:41:12.508884 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e-utilities\") pod \"community-operators-wc6h2\" (UID: \"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e\") " pod="openshift-marketplace/community-operators-wc6h2"
Mar 12 15:41:12 crc kubenswrapper[4832]: I0312 15:41:12.530340 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds68m\" (UniqueName: \"kubernetes.io/projected/65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e-kube-api-access-ds68m\") pod \"community-operators-wc6h2\" (UID: \"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e\") " pod="openshift-marketplace/community-operators-wc6h2"
Mar 12 15:41:12 crc kubenswrapper[4832]: I0312 15:41:12.707334 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wc6h2"
Mar 12 15:41:13 crc kubenswrapper[4832]: I0312 15:41:13.225937 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wc6h2"]
Mar 12 15:41:13 crc kubenswrapper[4832]: I0312 15:41:13.831451 4832 generic.go:334] "Generic (PLEG): container finished" podID="65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e" containerID="6bcf664bd241f4e2b8c27f91ab41a7c719ca52cb7d5dd9369d73bc5b74cb6636" exitCode=0
Mar 12 15:41:13 crc kubenswrapper[4832]: I0312 15:41:13.831762 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wc6h2" event={"ID":"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e","Type":"ContainerDied","Data":"6bcf664bd241f4e2b8c27f91ab41a7c719ca52cb7d5dd9369d73bc5b74cb6636"}
Mar 12 15:41:13 crc kubenswrapper[4832]: I0312 15:41:13.831796 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wc6h2" event={"ID":"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e","Type":"ContainerStarted","Data":"27d5e08dafd3bbe12d2efd5a41ea5337367cd20fe8b65f9d6dc3bd4c919b7dfa"}
Mar 12 15:41:14 crc kubenswrapper[4832]: I0312 15:41:14.846287 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wc6h2" event={"ID":"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e","Type":"ContainerStarted","Data":"e4256c1690d8a06e7336b1156ec71b8748ab05831bd7d98fbda2bf0b9497057c"}
Mar 12 15:41:16 crc kubenswrapper[4832]: I0312 15:41:16.872749 4832 generic.go:334] "Generic (PLEG): container finished" podID="65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e" containerID="e4256c1690d8a06e7336b1156ec71b8748ab05831bd7d98fbda2bf0b9497057c" exitCode=0
Mar 12 15:41:16 crc kubenswrapper[4832]: I0312 15:41:16.873129 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wc6h2" event={"ID":"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e","Type":"ContainerDied","Data":"e4256c1690d8a06e7336b1156ec71b8748ab05831bd7d98fbda2bf0b9497057c"}
Mar 12 15:41:17 crc kubenswrapper[4832]: I0312 15:41:17.884233 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wc6h2" event={"ID":"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e","Type":"ContainerStarted","Data":"afd1ffccdd6e313d8075d3e87ffe5d55d990843bc47cb2d5155862ae4ab82355"}
Mar 12 15:41:17 crc kubenswrapper[4832]: I0312 15:41:17.909320 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wc6h2" podStartSLOduration=2.456696837 podStartE2EDuration="5.909303501s" podCreationTimestamp="2026-03-12 15:41:12 +0000 UTC" firstStartedPulling="2026-03-12 15:41:13.833824278 +0000 UTC m=+3232.477838504" lastFinishedPulling="2026-03-12 15:41:17.286430932 +0000 UTC m=+3235.930445168" observedRunningTime="2026-03-12 15:41:17.902490067 +0000 UTC m=+3236.546504313" watchObservedRunningTime="2026-03-12 15:41:17.909303501 +0000 UTC m=+3236.553317727"
Mar 12 15:41:19 crc kubenswrapper[4832]: I0312 15:41:19.619624 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05"
Mar 12 15:41:19 crc kubenswrapper[4832]: E0312 15:41:19.620453 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a"
Mar 12 15:41:22 crc kubenswrapper[4832]: I0312 15:41:22.707742 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wc6h2"
Mar 12 15:41:22 crc kubenswrapper[4832]: I0312 15:41:22.708119 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wc6h2"
Mar 12 15:41:22 crc kubenswrapper[4832]: I0312 15:41:22.788060 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wc6h2"
Mar 12 15:41:23 crc kubenswrapper[4832]: I0312 15:41:23.047730 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wc6h2"
Mar 12 15:41:23 crc kubenswrapper[4832]: I0312 15:41:23.123277 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wc6h2"]
Mar 12 15:41:24 crc kubenswrapper[4832]: I0312 15:41:24.986683 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wc6h2" podUID="65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e" containerName="registry-server" containerID="cri-o://afd1ffccdd6e313d8075d3e87ffe5d55d990843bc47cb2d5155862ae4ab82355" gracePeriod=2
Mar 12 15:41:25 crc kubenswrapper[4832]: I0312 15:41:25.561608 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wc6h2"
Mar 12 15:41:25 crc kubenswrapper[4832]: I0312 15:41:25.598626 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e-catalog-content\") pod \"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e\" (UID: \"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e\") "
Mar 12 15:41:25 crc kubenswrapper[4832]: I0312 15:41:25.598989 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e-utilities\") pod \"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e\" (UID: \"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e\") "
Mar 12 15:41:25 crc kubenswrapper[4832]: I0312 15:41:25.599332 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds68m\" (UniqueName: \"kubernetes.io/projected/65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e-kube-api-access-ds68m\") pod \"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e\" (UID: \"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e\") "
Mar 12 15:41:25 crc kubenswrapper[4832]: I0312 15:41:25.600107 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e-utilities" (OuterVolumeSpecName: "utilities") pod "65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e" (UID: "65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 15:41:25 crc kubenswrapper[4832]: I0312 15:41:25.600656 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 15:41:25 crc kubenswrapper[4832]: I0312 15:41:25.607862 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e-kube-api-access-ds68m" (OuterVolumeSpecName: "kube-api-access-ds68m") pod "65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e" (UID: "65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e"). InnerVolumeSpecName "kube-api-access-ds68m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:41:25 crc kubenswrapper[4832]: I0312 15:41:25.663375 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e" (UID: "65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 15:41:25 crc kubenswrapper[4832]: I0312 15:41:25.703646 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds68m\" (UniqueName: \"kubernetes.io/projected/65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e-kube-api-access-ds68m\") on node \"crc\" DevicePath \"\""
Mar 12 15:41:25 crc kubenswrapper[4832]: I0312 15:41:25.703700 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 12 15:41:26 crc kubenswrapper[4832]: I0312 15:41:26.011346 4832 generic.go:334] "Generic (PLEG): container finished" podID="65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e" containerID="afd1ffccdd6e313d8075d3e87ffe5d55d990843bc47cb2d5155862ae4ab82355" exitCode=0
Mar 12 15:41:26 crc kubenswrapper[4832]: I0312 15:41:26.011400 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wc6h2" event={"ID":"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e","Type":"ContainerDied","Data":"afd1ffccdd6e313d8075d3e87ffe5d55d990843bc47cb2d5155862ae4ab82355"}
Mar 12 15:41:26 crc kubenswrapper[4832]: I0312 15:41:26.011426 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wc6h2"
Mar 12 15:41:26 crc kubenswrapper[4832]: I0312 15:41:26.011452 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wc6h2" event={"ID":"65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e","Type":"ContainerDied","Data":"27d5e08dafd3bbe12d2efd5a41ea5337367cd20fe8b65f9d6dc3bd4c919b7dfa"}
Mar 12 15:41:26 crc kubenswrapper[4832]: I0312 15:41:26.011541 4832 scope.go:117] "RemoveContainer" containerID="afd1ffccdd6e313d8075d3e87ffe5d55d990843bc47cb2d5155862ae4ab82355"
Mar 12 15:41:26 crc kubenswrapper[4832]: I0312 15:41:26.047780 4832 scope.go:117] "RemoveContainer" containerID="e4256c1690d8a06e7336b1156ec71b8748ab05831bd7d98fbda2bf0b9497057c"
Mar 12 15:41:26 crc kubenswrapper[4832]: I0312 15:41:26.070321 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wc6h2"]
Mar 12 15:41:26 crc kubenswrapper[4832]: I0312 15:41:26.076547 4832 scope.go:117] "RemoveContainer" containerID="6bcf664bd241f4e2b8c27f91ab41a7c719ca52cb7d5dd9369d73bc5b74cb6636"
Mar 12 15:41:26 crc kubenswrapper[4832]: I0312 15:41:26.080251 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wc6h2"]
Mar 12 15:41:26 crc kubenswrapper[4832]: I0312 15:41:26.130764 4832 scope.go:117] "RemoveContainer" containerID="afd1ffccdd6e313d8075d3e87ffe5d55d990843bc47cb2d5155862ae4ab82355"
Mar 12 15:41:26 crc kubenswrapper[4832]: E0312 15:41:26.131419 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afd1ffccdd6e313d8075d3e87ffe5d55d990843bc47cb2d5155862ae4ab82355\": container with ID starting with afd1ffccdd6e313d8075d3e87ffe5d55d990843bc47cb2d5155862ae4ab82355 not found: ID does not exist" containerID="afd1ffccdd6e313d8075d3e87ffe5d55d990843bc47cb2d5155862ae4ab82355"
Mar 12 15:41:26 crc kubenswrapper[4832]: I0312 15:41:26.131453 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afd1ffccdd6e313d8075d3e87ffe5d55d990843bc47cb2d5155862ae4ab82355"} err="failed to get container status \"afd1ffccdd6e313d8075d3e87ffe5d55d990843bc47cb2d5155862ae4ab82355\": rpc error: code = NotFound desc = could not find container \"afd1ffccdd6e313d8075d3e87ffe5d55d990843bc47cb2d5155862ae4ab82355\": container with ID starting with afd1ffccdd6e313d8075d3e87ffe5d55d990843bc47cb2d5155862ae4ab82355 not found: ID does not exist"
Mar 12 15:41:26 crc kubenswrapper[4832]: I0312 15:41:26.131474 4832 scope.go:117] "RemoveContainer" containerID="e4256c1690d8a06e7336b1156ec71b8748ab05831bd7d98fbda2bf0b9497057c"
Mar 12 15:41:26 crc kubenswrapper[4832]: E0312 15:41:26.131988 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4256c1690d8a06e7336b1156ec71b8748ab05831bd7d98fbda2bf0b9497057c\": container with ID starting with e4256c1690d8a06e7336b1156ec71b8748ab05831bd7d98fbda2bf0b9497057c not found: ID does not exist" containerID="e4256c1690d8a06e7336b1156ec71b8748ab05831bd7d98fbda2bf0b9497057c"
Mar 12 15:41:26 crc kubenswrapper[4832]: I0312 15:41:26.132012 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4256c1690d8a06e7336b1156ec71b8748ab05831bd7d98fbda2bf0b9497057c"} err="failed to get container status \"e4256c1690d8a06e7336b1156ec71b8748ab05831bd7d98fbda2bf0b9497057c\": rpc error: code = NotFound desc = could not find container \"e4256c1690d8a06e7336b1156ec71b8748ab05831bd7d98fbda2bf0b9497057c\": container with ID starting with e4256c1690d8a06e7336b1156ec71b8748ab05831bd7d98fbda2bf0b9497057c not found: ID does not exist"
Mar 12 15:41:26 crc kubenswrapper[4832]: I0312 15:41:26.132031 4832 scope.go:117] "RemoveContainer" containerID="6bcf664bd241f4e2b8c27f91ab41a7c719ca52cb7d5dd9369d73bc5b74cb6636"
Mar 12 15:41:26 crc kubenswrapper[4832]: E0312 15:41:26.132281 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bcf664bd241f4e2b8c27f91ab41a7c719ca52cb7d5dd9369d73bc5b74cb6636\": container with ID starting with 6bcf664bd241f4e2b8c27f91ab41a7c719ca52cb7d5dd9369d73bc5b74cb6636 not found: ID does not exist" containerID="6bcf664bd241f4e2b8c27f91ab41a7c719ca52cb7d5dd9369d73bc5b74cb6636"
Mar 12 15:41:26 crc kubenswrapper[4832]: I0312 15:41:26.132304 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bcf664bd241f4e2b8c27f91ab41a7c719ca52cb7d5dd9369d73bc5b74cb6636"} err="failed to get container status \"6bcf664bd241f4e2b8c27f91ab41a7c719ca52cb7d5dd9369d73bc5b74cb6636\": rpc error: code = NotFound desc = could not find container \"6bcf664bd241f4e2b8c27f91ab41a7c719ca52cb7d5dd9369d73bc5b74cb6636\": container with ID starting with 6bcf664bd241f4e2b8c27f91ab41a7c719ca52cb7d5dd9369d73bc5b74cb6636 not found: ID does not exist"
Mar 12 15:41:26 crc kubenswrapper[4832]: I0312 15:41:26.643085 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e" path="/var/lib/kubelet/pods/65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e/volumes"
Mar 12 15:41:33 crc kubenswrapper[4832]: I0312 15:41:33.620739 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05"
Mar 12 15:41:33 crc kubenswrapper[4832]: E0312 15:41:33.621448 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a"
Mar 12 15:41:46 crc kubenswrapper[4832]: I0312 15:41:46.620075 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05"
Mar 12 15:41:46 crc kubenswrapper[4832]: E0312 15:41:46.621218 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a"
Mar 12 15:42:00 crc kubenswrapper[4832]: I0312 15:42:00.210702 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555502-vt2sx"]
Mar 12 15:42:00 crc kubenswrapper[4832]: E0312 15:42:00.211793 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e" containerName="extract-content"
Mar 12 15:42:00 crc kubenswrapper[4832]: I0312 15:42:00.211813 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e" containerName="extract-content"
Mar 12 15:42:00 crc kubenswrapper[4832]: E0312 15:42:00.211846 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e" containerName="extract-utilities"
Mar 12 15:42:00 crc kubenswrapper[4832]: I0312 15:42:00.211854 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e" containerName="extract-utilities"
Mar 12 15:42:00 crc kubenswrapper[4832]: E0312 15:42:00.211881 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e" containerName="registry-server"
Mar 12 15:42:00 crc kubenswrapper[4832]: I0312 15:42:00.211889 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e" containerName="registry-server"
Mar 12 15:42:00 crc kubenswrapper[4832]: I0312 15:42:00.212114 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c6c31b-1f29-44b0-a92b-1f3e1f53ce9e" containerName="registry-server"
Mar 12 15:42:00 crc kubenswrapper[4832]: I0312 15:42:00.212968 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555502-vt2sx"
Mar 12 15:42:00 crc kubenswrapper[4832]: I0312 15:42:00.216045 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 12 15:42:00 crc kubenswrapper[4832]: I0312 15:42:00.216248 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9"
Mar 12 15:42:00 crc kubenswrapper[4832]: I0312 15:42:00.216777 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 12 15:42:00 crc kubenswrapper[4832]: I0312 15:42:00.219317 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555502-vt2sx"]
Mar 12 15:42:00 crc kubenswrapper[4832]: I0312 15:42:00.347107 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wwlv\" (UniqueName: \"kubernetes.io/projected/de9797b7-dc13-4a96-a595-7a926c9881a3-kube-api-access-8wwlv\") pod \"auto-csr-approver-29555502-vt2sx\" (UID: \"de9797b7-dc13-4a96-a595-7a926c9881a3\") " pod="openshift-infra/auto-csr-approver-29555502-vt2sx"
Mar 12 15:42:00 crc kubenswrapper[4832]: I0312 15:42:00.448459 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wwlv\" (UniqueName: \"kubernetes.io/projected/de9797b7-dc13-4a96-a595-7a926c9881a3-kube-api-access-8wwlv\") pod \"auto-csr-approver-29555502-vt2sx\" (UID: \"de9797b7-dc13-4a96-a595-7a926c9881a3\") " pod="openshift-infra/auto-csr-approver-29555502-vt2sx"
Mar 12 15:42:00 crc kubenswrapper[4832]: I0312 15:42:00.481451 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wwlv\" (UniqueName: \"kubernetes.io/projected/de9797b7-dc13-4a96-a595-7a926c9881a3-kube-api-access-8wwlv\") pod \"auto-csr-approver-29555502-vt2sx\" (UID: \"de9797b7-dc13-4a96-a595-7a926c9881a3\") " pod="openshift-infra/auto-csr-approver-29555502-vt2sx"
Mar 12 15:42:00 crc kubenswrapper[4832]: I0312 15:42:00.540901 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555502-vt2sx"
Mar 12 15:42:00 crc kubenswrapper[4832]: I0312 15:42:00.620248 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05"
Mar 12 15:42:01 crc kubenswrapper[4832]: I0312 15:42:01.025192 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555502-vt2sx"]
Mar 12 15:42:01 crc kubenswrapper[4832]: I0312 15:42:01.384388 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerStarted","Data":"9c57499912bbde1c5ad6ba1ac6bbb5cfde685e2cc87fb12523a97e99c667c390"}
Mar 12 15:42:01 crc kubenswrapper[4832]: I0312 15:42:01.390066 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555502-vt2sx" event={"ID":"de9797b7-dc13-4a96-a595-7a926c9881a3","Type":"ContainerStarted","Data":"a40377a48e1ea5a5c7dfbac7a5ecd0cb11656263b16235e38df0c729bc4497ea"}
Mar 12 15:42:03 crc kubenswrapper[4832]: I0312 15:42:03.432809 4832 generic.go:334] "Generic (PLEG): container finished" podID="de9797b7-dc13-4a96-a595-7a926c9881a3" containerID="252a02c259a85a809f4957d52872633ce3c55071be848e5f25674a5adafdbe77" exitCode=0
Mar 12 15:42:03 crc kubenswrapper[4832]: I0312 15:42:03.433271 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555502-vt2sx" event={"ID":"de9797b7-dc13-4a96-a595-7a926c9881a3","Type":"ContainerDied","Data":"252a02c259a85a809f4957d52872633ce3c55071be848e5f25674a5adafdbe77"}
Mar 12 15:42:04 crc kubenswrapper[4832]: I0312 15:42:04.794986 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555502-vt2sx"
Mar 12 15:42:04 crc kubenswrapper[4832]: I0312 15:42:04.851259 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wwlv\" (UniqueName: \"kubernetes.io/projected/de9797b7-dc13-4a96-a595-7a926c9881a3-kube-api-access-8wwlv\") pod \"de9797b7-dc13-4a96-a595-7a926c9881a3\" (UID: \"de9797b7-dc13-4a96-a595-7a926c9881a3\") "
Mar 12 15:42:04 crc kubenswrapper[4832]: I0312 15:42:04.861717 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de9797b7-dc13-4a96-a595-7a926c9881a3-kube-api-access-8wwlv" (OuterVolumeSpecName: "kube-api-access-8wwlv") pod "de9797b7-dc13-4a96-a595-7a926c9881a3" (UID: "de9797b7-dc13-4a96-a595-7a926c9881a3"). InnerVolumeSpecName "kube-api-access-8wwlv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:42:04 crc kubenswrapper[4832]: I0312 15:42:04.953907 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wwlv\" (UniqueName: \"kubernetes.io/projected/de9797b7-dc13-4a96-a595-7a926c9881a3-kube-api-access-8wwlv\") on node \"crc\" DevicePath \"\""
Mar 12 15:42:05 crc kubenswrapper[4832]: I0312 15:42:05.455912 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555502-vt2sx" event={"ID":"de9797b7-dc13-4a96-a595-7a926c9881a3","Type":"ContainerDied","Data":"a40377a48e1ea5a5c7dfbac7a5ecd0cb11656263b16235e38df0c729bc4497ea"}
Mar 12 15:42:05 crc kubenswrapper[4832]: I0312 15:42:05.455969 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a40377a48e1ea5a5c7dfbac7a5ecd0cb11656263b16235e38df0c729bc4497ea"
Mar 12 15:42:05 crc kubenswrapper[4832]: I0312 15:42:05.455998 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555502-vt2sx"
Mar 12 15:42:05 crc kubenswrapper[4832]: I0312 15:42:05.891605 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555496-vptrr"]
Mar 12 15:42:05 crc kubenswrapper[4832]: I0312 15:42:05.900709 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555496-vptrr"]
Mar 12 15:42:06 crc kubenswrapper[4832]: I0312 15:42:06.634700 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e231000f-90b7-4df8-bfc1-59097f15209f" path="/var/lib/kubelet/pods/e231000f-90b7-4df8-bfc1-59097f15209f/volumes"
Mar 12 15:42:52 crc kubenswrapper[4832]: I0312 15:42:52.491986 4832 scope.go:117] "RemoveContainer" containerID="0b354a3da76f04fe9edf61549275c2d4a2640359062ded8842ae06bcf8ce2e14"
Mar 12 15:44:00 crc kubenswrapper[4832]: I0312 15:44:00.151745 4832 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openshift-infra/auto-csr-approver-29555504-tqq42"] Mar 12 15:44:00 crc kubenswrapper[4832]: E0312 15:44:00.152606 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de9797b7-dc13-4a96-a595-7a926c9881a3" containerName="oc" Mar 12 15:44:00 crc kubenswrapper[4832]: I0312 15:44:00.152617 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="de9797b7-dc13-4a96-a595-7a926c9881a3" containerName="oc" Mar 12 15:44:00 crc kubenswrapper[4832]: I0312 15:44:00.152804 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="de9797b7-dc13-4a96-a595-7a926c9881a3" containerName="oc" Mar 12 15:44:00 crc kubenswrapper[4832]: I0312 15:44:00.153415 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555504-tqq42" Mar 12 15:44:00 crc kubenswrapper[4832]: I0312 15:44:00.156423 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:44:00 crc kubenswrapper[4832]: I0312 15:44:00.158783 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:44:00 crc kubenswrapper[4832]: I0312 15:44:00.161472 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:44:00 crc kubenswrapper[4832]: I0312 15:44:00.168546 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555504-tqq42"] Mar 12 15:44:00 crc kubenswrapper[4832]: I0312 15:44:00.331822 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zqq7\" (UniqueName: \"kubernetes.io/projected/3477640f-166b-4c95-bb4e-dda23fe29206-kube-api-access-4zqq7\") pod \"auto-csr-approver-29555504-tqq42\" (UID: \"3477640f-166b-4c95-bb4e-dda23fe29206\") " pod="openshift-infra/auto-csr-approver-29555504-tqq42" Mar 12 15:44:00 crc kubenswrapper[4832]: I0312 
15:44:00.433840 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zqq7\" (UniqueName: \"kubernetes.io/projected/3477640f-166b-4c95-bb4e-dda23fe29206-kube-api-access-4zqq7\") pod \"auto-csr-approver-29555504-tqq42\" (UID: \"3477640f-166b-4c95-bb4e-dda23fe29206\") " pod="openshift-infra/auto-csr-approver-29555504-tqq42" Mar 12 15:44:00 crc kubenswrapper[4832]: I0312 15:44:00.458700 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zqq7\" (UniqueName: \"kubernetes.io/projected/3477640f-166b-4c95-bb4e-dda23fe29206-kube-api-access-4zqq7\") pod \"auto-csr-approver-29555504-tqq42\" (UID: \"3477640f-166b-4c95-bb4e-dda23fe29206\") " pod="openshift-infra/auto-csr-approver-29555504-tqq42" Mar 12 15:44:00 crc kubenswrapper[4832]: I0312 15:44:00.473339 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555504-tqq42" Mar 12 15:44:01 crc kubenswrapper[4832]: I0312 15:44:01.005009 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555504-tqq42"] Mar 12 15:44:01 crc kubenswrapper[4832]: W0312 15:44:01.011879 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3477640f_166b_4c95_bb4e_dda23fe29206.slice/crio-71817fed03716ff9a75af86d4444fd0f594411ccefb78d0a133030fff2fd3279 WatchSource:0}: Error finding container 71817fed03716ff9a75af86d4444fd0f594411ccefb78d0a133030fff2fd3279: Status 404 returned error can't find the container with id 71817fed03716ff9a75af86d4444fd0f594411ccefb78d0a133030fff2fd3279 Mar 12 15:44:01 crc kubenswrapper[4832]: I0312 15:44:01.014372 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 15:44:01 crc kubenswrapper[4832]: I0312 15:44:01.767370 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29555504-tqq42" event={"ID":"3477640f-166b-4c95-bb4e-dda23fe29206","Type":"ContainerStarted","Data":"71817fed03716ff9a75af86d4444fd0f594411ccefb78d0a133030fff2fd3279"} Mar 12 15:44:02 crc kubenswrapper[4832]: I0312 15:44:02.776673 4832 generic.go:334] "Generic (PLEG): container finished" podID="3477640f-166b-4c95-bb4e-dda23fe29206" containerID="89b8612775d210ae445c852b19b02962f7ab4b8d19eaa870bcdb59888fbe59e9" exitCode=0 Mar 12 15:44:02 crc kubenswrapper[4832]: I0312 15:44:02.776927 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555504-tqq42" event={"ID":"3477640f-166b-4c95-bb4e-dda23fe29206","Type":"ContainerDied","Data":"89b8612775d210ae445c852b19b02962f7ab4b8d19eaa870bcdb59888fbe59e9"} Mar 12 15:44:04 crc kubenswrapper[4832]: I0312 15:44:04.275017 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555504-tqq42" Mar 12 15:44:04 crc kubenswrapper[4832]: I0312 15:44:04.420316 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zqq7\" (UniqueName: \"kubernetes.io/projected/3477640f-166b-4c95-bb4e-dda23fe29206-kube-api-access-4zqq7\") pod \"3477640f-166b-4c95-bb4e-dda23fe29206\" (UID: \"3477640f-166b-4c95-bb4e-dda23fe29206\") " Mar 12 15:44:04 crc kubenswrapper[4832]: I0312 15:44:04.427123 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3477640f-166b-4c95-bb4e-dda23fe29206-kube-api-access-4zqq7" (OuterVolumeSpecName: "kube-api-access-4zqq7") pod "3477640f-166b-4c95-bb4e-dda23fe29206" (UID: "3477640f-166b-4c95-bb4e-dda23fe29206"). InnerVolumeSpecName "kube-api-access-4zqq7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:44:04 crc kubenswrapper[4832]: I0312 15:44:04.523174 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zqq7\" (UniqueName: \"kubernetes.io/projected/3477640f-166b-4c95-bb4e-dda23fe29206-kube-api-access-4zqq7\") on node \"crc\" DevicePath \"\"" Mar 12 15:44:04 crc kubenswrapper[4832]: I0312 15:44:04.796822 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555504-tqq42" event={"ID":"3477640f-166b-4c95-bb4e-dda23fe29206","Type":"ContainerDied","Data":"71817fed03716ff9a75af86d4444fd0f594411ccefb78d0a133030fff2fd3279"} Mar 12 15:44:04 crc kubenswrapper[4832]: I0312 15:44:04.796875 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71817fed03716ff9a75af86d4444fd0f594411ccefb78d0a133030fff2fd3279" Mar 12 15:44:04 crc kubenswrapper[4832]: I0312 15:44:04.796900 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555504-tqq42" Mar 12 15:44:05 crc kubenswrapper[4832]: I0312 15:44:05.362614 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555498-z2ptk"] Mar 12 15:44:05 crc kubenswrapper[4832]: I0312 15:44:05.378317 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555498-z2ptk"] Mar 12 15:44:06 crc kubenswrapper[4832]: I0312 15:44:06.636439 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="559b4600-c134-4a7a-ad81-853daec70098" path="/var/lib/kubelet/pods/559b4600-c134-4a7a-ad81-853daec70098/volumes" Mar 12 15:44:26 crc kubenswrapper[4832]: I0312 15:44:26.314572 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 12 15:44:26 crc kubenswrapper[4832]: I0312 15:44:26.315062 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:44:31 crc kubenswrapper[4832]: I0312 15:44:31.059734 4832 generic.go:334] "Generic (PLEG): container finished" podID="5d12cc2d-980d-4992-ac59-1d874529ad70" containerID="18e9f3223c4ba8c69a4697aa9770d4aa346caf86ec318f691dfd52d0c27a6ed5" exitCode=0 Mar 12 15:44:31 crc kubenswrapper[4832]: I0312 15:44:31.059884 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5d12cc2d-980d-4992-ac59-1d874529ad70","Type":"ContainerDied","Data":"18e9f3223c4ba8c69a4697aa9770d4aa346caf86ec318f691dfd52d0c27a6ed5"} Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.495895 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.533029 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5d12cc2d-980d-4992-ac59-1d874529ad70-ca-certs\") pod \"5d12cc2d-980d-4992-ac59-1d874529ad70\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.533084 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5d12cc2d-980d-4992-ac59-1d874529ad70-test-operator-ephemeral-temporary\") pod \"5d12cc2d-980d-4992-ac59-1d874529ad70\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.533127 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"5d12cc2d-980d-4992-ac59-1d874529ad70\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.533787 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5d12cc2d-980d-4992-ac59-1d874529ad70-test-operator-ephemeral-workdir\") pod \"5d12cc2d-980d-4992-ac59-1d874529ad70\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.533827 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d12cc2d-980d-4992-ac59-1d874529ad70-openstack-config-secret\") pod \"5d12cc2d-980d-4992-ac59-1d874529ad70\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.533888 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d12cc2d-980d-4992-ac59-1d874529ad70-ssh-key\") pod \"5d12cc2d-980d-4992-ac59-1d874529ad70\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.533941 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkmcv\" (UniqueName: \"kubernetes.io/projected/5d12cc2d-980d-4992-ac59-1d874529ad70-kube-api-access-nkmcv\") pod \"5d12cc2d-980d-4992-ac59-1d874529ad70\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.533959 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d12cc2d-980d-4992-ac59-1d874529ad70-config-data\") pod \"5d12cc2d-980d-4992-ac59-1d874529ad70\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.534034 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d12cc2d-980d-4992-ac59-1d874529ad70-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "5d12cc2d-980d-4992-ac59-1d874529ad70" (UID: "5d12cc2d-980d-4992-ac59-1d874529ad70"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.534061 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d12cc2d-980d-4992-ac59-1d874529ad70-openstack-config\") pod \"5d12cc2d-980d-4992-ac59-1d874529ad70\" (UID: \"5d12cc2d-980d-4992-ac59-1d874529ad70\") " Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.534956 4832 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5d12cc2d-980d-4992-ac59-1d874529ad70-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.540712 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "test-operator-logs") pod "5d12cc2d-980d-4992-ac59-1d874529ad70" (UID: "5d12cc2d-980d-4992-ac59-1d874529ad70"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.541632 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d12cc2d-980d-4992-ac59-1d874529ad70-kube-api-access-nkmcv" (OuterVolumeSpecName: "kube-api-access-nkmcv") pod "5d12cc2d-980d-4992-ac59-1d874529ad70" (UID: "5d12cc2d-980d-4992-ac59-1d874529ad70"). InnerVolumeSpecName "kube-api-access-nkmcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.542616 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d12cc2d-980d-4992-ac59-1d874529ad70-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "5d12cc2d-980d-4992-ac59-1d874529ad70" (UID: "5d12cc2d-980d-4992-ac59-1d874529ad70"). 
InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.544160 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d12cc2d-980d-4992-ac59-1d874529ad70-config-data" (OuterVolumeSpecName: "config-data") pod "5d12cc2d-980d-4992-ac59-1d874529ad70" (UID: "5d12cc2d-980d-4992-ac59-1d874529ad70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.565310 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d12cc2d-980d-4992-ac59-1d874529ad70-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "5d12cc2d-980d-4992-ac59-1d874529ad70" (UID: "5d12cc2d-980d-4992-ac59-1d874529ad70"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.568420 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d12cc2d-980d-4992-ac59-1d874529ad70-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5d12cc2d-980d-4992-ac59-1d874529ad70" (UID: "5d12cc2d-980d-4992-ac59-1d874529ad70"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.576178 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d12cc2d-980d-4992-ac59-1d874529ad70-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5d12cc2d-980d-4992-ac59-1d874529ad70" (UID: "5d12cc2d-980d-4992-ac59-1d874529ad70"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.595940 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d12cc2d-980d-4992-ac59-1d874529ad70-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "5d12cc2d-980d-4992-ac59-1d874529ad70" (UID: "5d12cc2d-980d-4992-ac59-1d874529ad70"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.636814 4832 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5d12cc2d-980d-4992-ac59-1d874529ad70-ca-certs\") on node \"crc\" DevicePath \"\"" Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.636860 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.636871 4832 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5d12cc2d-980d-4992-ac59-1d874529ad70-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.636882 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d12cc2d-980d-4992-ac59-1d874529ad70-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.636891 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d12cc2d-980d-4992-ac59-1d874529ad70-ssh-key\") on node \"crc\" DevicePath \"\"" Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.636901 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkmcv\" 
(UniqueName: \"kubernetes.io/projected/5d12cc2d-980d-4992-ac59-1d874529ad70-kube-api-access-nkmcv\") on node \"crc\" DevicePath \"\"" Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.636909 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d12cc2d-980d-4992-ac59-1d874529ad70-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.636917 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d12cc2d-980d-4992-ac59-1d874529ad70-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.659371 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Mar 12 15:44:32 crc kubenswrapper[4832]: I0312 15:44:32.738421 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Mar 12 15:44:33 crc kubenswrapper[4832]: I0312 15:44:33.082678 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5d12cc2d-980d-4992-ac59-1d874529ad70","Type":"ContainerDied","Data":"74172087e3ae04841d0f0e01a2935fcc0b9d2dfc8e06e1796ba167aba3fc1818"} Mar 12 15:44:33 crc kubenswrapper[4832]: I0312 15:44:33.082731 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74172087e3ae04841d0f0e01a2935fcc0b9d2dfc8e06e1796ba167aba3fc1818" Mar 12 15:44:33 crc kubenswrapper[4832]: I0312 15:44:33.082734 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 12 15:44:38 crc kubenswrapper[4832]: I0312 15:44:38.964686 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 12 15:44:38 crc kubenswrapper[4832]: E0312 15:44:38.965891 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3477640f-166b-4c95-bb4e-dda23fe29206" containerName="oc" Mar 12 15:44:38 crc kubenswrapper[4832]: I0312 15:44:38.965911 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3477640f-166b-4c95-bb4e-dda23fe29206" containerName="oc" Mar 12 15:44:38 crc kubenswrapper[4832]: E0312 15:44:38.965942 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d12cc2d-980d-4992-ac59-1d874529ad70" containerName="tempest-tests-tempest-tests-runner" Mar 12 15:44:38 crc kubenswrapper[4832]: I0312 15:44:38.965956 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d12cc2d-980d-4992-ac59-1d874529ad70" containerName="tempest-tests-tempest-tests-runner" Mar 12 15:44:38 crc kubenswrapper[4832]: I0312 15:44:38.966284 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d12cc2d-980d-4992-ac59-1d874529ad70" containerName="tempest-tests-tempest-tests-runner" Mar 12 15:44:38 crc kubenswrapper[4832]: I0312 15:44:38.966313 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3477640f-166b-4c95-bb4e-dda23fe29206" containerName="oc" Mar 12 15:44:38 crc kubenswrapper[4832]: I0312 15:44:38.967419 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 15:44:38 crc kubenswrapper[4832]: I0312 15:44:38.970992 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qwgpm" Mar 12 15:44:38 crc kubenswrapper[4832]: I0312 15:44:38.982792 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 12 15:44:39 crc kubenswrapper[4832]: I0312 15:44:39.157826 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnthd\" (UniqueName: \"kubernetes.io/projected/b5c4dbfb-fd36-43ed-a327-0a01fe766188-kube-api-access-fnthd\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b5c4dbfb-fd36-43ed-a327-0a01fe766188\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 15:44:39 crc kubenswrapper[4832]: I0312 15:44:39.157961 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b5c4dbfb-fd36-43ed-a327-0a01fe766188\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 15:44:39 crc kubenswrapper[4832]: I0312 15:44:39.259347 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnthd\" (UniqueName: \"kubernetes.io/projected/b5c4dbfb-fd36-43ed-a327-0a01fe766188-kube-api-access-fnthd\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b5c4dbfb-fd36-43ed-a327-0a01fe766188\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 15:44:39 crc kubenswrapper[4832]: I0312 15:44:39.259669 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b5c4dbfb-fd36-43ed-a327-0a01fe766188\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 15:44:39 crc kubenswrapper[4832]: I0312 15:44:39.260058 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b5c4dbfb-fd36-43ed-a327-0a01fe766188\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 15:44:39 crc kubenswrapper[4832]: I0312 15:44:39.287175 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnthd\" (UniqueName: \"kubernetes.io/projected/b5c4dbfb-fd36-43ed-a327-0a01fe766188-kube-api-access-fnthd\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b5c4dbfb-fd36-43ed-a327-0a01fe766188\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 15:44:39 crc kubenswrapper[4832]: I0312 15:44:39.295784 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b5c4dbfb-fd36-43ed-a327-0a01fe766188\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 15:44:39 crc kubenswrapper[4832]: I0312 15:44:39.317128 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 15:44:39 crc kubenswrapper[4832]: I0312 15:44:39.757387 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 12 15:44:40 crc kubenswrapper[4832]: I0312 15:44:40.167872 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b5c4dbfb-fd36-43ed-a327-0a01fe766188","Type":"ContainerStarted","Data":"83984c898d1477ded0b8c9d8ae2331792e34d65bcc282834accfb0873653d886"} Mar 12 15:44:41 crc kubenswrapper[4832]: I0312 15:44:41.181272 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b5c4dbfb-fd36-43ed-a327-0a01fe766188","Type":"ContainerStarted","Data":"80d122fb724086b2b61a0a8f6d0a0f6f4e654e045ae02bf853b8136b3543b51d"} Mar 12 15:44:41 crc kubenswrapper[4832]: I0312 15:44:41.211071 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.429518554 podStartE2EDuration="3.211051848s" podCreationTimestamp="2026-03-12 15:44:38 +0000 UTC" firstStartedPulling="2026-03-12 15:44:39.76394129 +0000 UTC m=+3438.407955546" lastFinishedPulling="2026-03-12 15:44:40.545474614 +0000 UTC m=+3439.189488840" observedRunningTime="2026-03-12 15:44:41.207208369 +0000 UTC m=+3439.851222635" watchObservedRunningTime="2026-03-12 15:44:41.211051848 +0000 UTC m=+3439.855066084" Mar 12 15:44:52 crc kubenswrapper[4832]: I0312 15:44:52.606416 4832 scope.go:117] "RemoveContainer" containerID="495ec2dfe1f876146886813b7a40cad2483a8df79f062d62feedf1efdc975bf7" Mar 12 15:44:56 crc kubenswrapper[4832]: I0312 15:44:56.314807 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:44:56 crc kubenswrapper[4832]: I0312 15:44:56.315276 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:45:00 crc kubenswrapper[4832]: I0312 15:45:00.146191 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr"] Mar 12 15:45:00 crc kubenswrapper[4832]: I0312 15:45:00.148698 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr" Mar 12 15:45:00 crc kubenswrapper[4832]: I0312 15:45:00.151004 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 15:45:00 crc kubenswrapper[4832]: I0312 15:45:00.151160 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 15:45:00 crc kubenswrapper[4832]: I0312 15:45:00.156005 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr"] Mar 12 15:45:00 crc kubenswrapper[4832]: I0312 15:45:00.292200 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7rff\" (UniqueName: \"kubernetes.io/projected/9847491d-23c2-43c7-bbd2-c1411e9e1bbf-kube-api-access-d7rff\") pod \"collect-profiles-29555505-csbmr\" (UID: \"9847491d-23c2-43c7-bbd2-c1411e9e1bbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr" Mar 
12 15:45:00 crc kubenswrapper[4832]: I0312 15:45:00.292276 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9847491d-23c2-43c7-bbd2-c1411e9e1bbf-secret-volume\") pod \"collect-profiles-29555505-csbmr\" (UID: \"9847491d-23c2-43c7-bbd2-c1411e9e1bbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr" Mar 12 15:45:00 crc kubenswrapper[4832]: I0312 15:45:00.292478 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9847491d-23c2-43c7-bbd2-c1411e9e1bbf-config-volume\") pod \"collect-profiles-29555505-csbmr\" (UID: \"9847491d-23c2-43c7-bbd2-c1411e9e1bbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr" Mar 12 15:45:00 crc kubenswrapper[4832]: I0312 15:45:00.393666 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7rff\" (UniqueName: \"kubernetes.io/projected/9847491d-23c2-43c7-bbd2-c1411e9e1bbf-kube-api-access-d7rff\") pod \"collect-profiles-29555505-csbmr\" (UID: \"9847491d-23c2-43c7-bbd2-c1411e9e1bbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr" Mar 12 15:45:00 crc kubenswrapper[4832]: I0312 15:45:00.393722 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9847491d-23c2-43c7-bbd2-c1411e9e1bbf-secret-volume\") pod \"collect-profiles-29555505-csbmr\" (UID: \"9847491d-23c2-43c7-bbd2-c1411e9e1bbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr" Mar 12 15:45:00 crc kubenswrapper[4832]: I0312 15:45:00.393833 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9847491d-23c2-43c7-bbd2-c1411e9e1bbf-config-volume\") pod 
\"collect-profiles-29555505-csbmr\" (UID: \"9847491d-23c2-43c7-bbd2-c1411e9e1bbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr" Mar 12 15:45:00 crc kubenswrapper[4832]: I0312 15:45:00.395153 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9847491d-23c2-43c7-bbd2-c1411e9e1bbf-config-volume\") pod \"collect-profiles-29555505-csbmr\" (UID: \"9847491d-23c2-43c7-bbd2-c1411e9e1bbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr" Mar 12 15:45:00 crc kubenswrapper[4832]: I0312 15:45:00.407393 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9847491d-23c2-43c7-bbd2-c1411e9e1bbf-secret-volume\") pod \"collect-profiles-29555505-csbmr\" (UID: \"9847491d-23c2-43c7-bbd2-c1411e9e1bbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr" Mar 12 15:45:00 crc kubenswrapper[4832]: I0312 15:45:00.411149 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7rff\" (UniqueName: \"kubernetes.io/projected/9847491d-23c2-43c7-bbd2-c1411e9e1bbf-kube-api-access-d7rff\") pod \"collect-profiles-29555505-csbmr\" (UID: \"9847491d-23c2-43c7-bbd2-c1411e9e1bbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr" Mar 12 15:45:00 crc kubenswrapper[4832]: I0312 15:45:00.522679 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr" Mar 12 15:45:01 crc kubenswrapper[4832]: I0312 15:45:01.004444 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr"] Mar 12 15:45:01 crc kubenswrapper[4832]: W0312 15:45:01.007602 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9847491d_23c2_43c7_bbd2_c1411e9e1bbf.slice/crio-2ebe5d843a43b26d3fd2f2d561301654e72022abbe8c2c009035d77601e20ca2 WatchSource:0}: Error finding container 2ebe5d843a43b26d3fd2f2d561301654e72022abbe8c2c009035d77601e20ca2: Status 404 returned error can't find the container with id 2ebe5d843a43b26d3fd2f2d561301654e72022abbe8c2c009035d77601e20ca2 Mar 12 15:45:01 crc kubenswrapper[4832]: I0312 15:45:01.406098 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr" event={"ID":"9847491d-23c2-43c7-bbd2-c1411e9e1bbf","Type":"ContainerStarted","Data":"183fff5e6a217a4284fde2300220d1c4bce6fa21c66c9b1888ecfa54605a7e16"} Mar 12 15:45:01 crc kubenswrapper[4832]: I0312 15:45:01.406628 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr" event={"ID":"9847491d-23c2-43c7-bbd2-c1411e9e1bbf","Type":"ContainerStarted","Data":"2ebe5d843a43b26d3fd2f2d561301654e72022abbe8c2c009035d77601e20ca2"} Mar 12 15:45:01 crc kubenswrapper[4832]: I0312 15:45:01.433690 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr" podStartSLOduration=1.4336720889999999 podStartE2EDuration="1.433672089s" podCreationTimestamp="2026-03-12 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 
15:45:01.428075109 +0000 UTC m=+3460.072089335" watchObservedRunningTime="2026-03-12 15:45:01.433672089 +0000 UTC m=+3460.077686315" Mar 12 15:45:02 crc kubenswrapper[4832]: I0312 15:45:02.419084 4832 generic.go:334] "Generic (PLEG): container finished" podID="9847491d-23c2-43c7-bbd2-c1411e9e1bbf" containerID="183fff5e6a217a4284fde2300220d1c4bce6fa21c66c9b1888ecfa54605a7e16" exitCode=0 Mar 12 15:45:02 crc kubenswrapper[4832]: I0312 15:45:02.419175 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr" event={"ID":"9847491d-23c2-43c7-bbd2-c1411e9e1bbf","Type":"ContainerDied","Data":"183fff5e6a217a4284fde2300220d1c4bce6fa21c66c9b1888ecfa54605a7e16"} Mar 12 15:45:03 crc kubenswrapper[4832]: I0312 15:45:03.975278 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr" Mar 12 15:45:04 crc kubenswrapper[4832]: I0312 15:45:04.071051 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9847491d-23c2-43c7-bbd2-c1411e9e1bbf-config-volume\") pod \"9847491d-23c2-43c7-bbd2-c1411e9e1bbf\" (UID: \"9847491d-23c2-43c7-bbd2-c1411e9e1bbf\") " Mar 12 15:45:04 crc kubenswrapper[4832]: I0312 15:45:04.071208 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7rff\" (UniqueName: \"kubernetes.io/projected/9847491d-23c2-43c7-bbd2-c1411e9e1bbf-kube-api-access-d7rff\") pod \"9847491d-23c2-43c7-bbd2-c1411e9e1bbf\" (UID: \"9847491d-23c2-43c7-bbd2-c1411e9e1bbf\") " Mar 12 15:45:04 crc kubenswrapper[4832]: I0312 15:45:04.071264 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9847491d-23c2-43c7-bbd2-c1411e9e1bbf-secret-volume\") pod \"9847491d-23c2-43c7-bbd2-c1411e9e1bbf\" (UID: 
\"9847491d-23c2-43c7-bbd2-c1411e9e1bbf\") " Mar 12 15:45:04 crc kubenswrapper[4832]: I0312 15:45:04.071766 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9847491d-23c2-43c7-bbd2-c1411e9e1bbf-config-volume" (OuterVolumeSpecName: "config-volume") pod "9847491d-23c2-43c7-bbd2-c1411e9e1bbf" (UID: "9847491d-23c2-43c7-bbd2-c1411e9e1bbf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:45:04 crc kubenswrapper[4832]: I0312 15:45:04.087859 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9847491d-23c2-43c7-bbd2-c1411e9e1bbf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9847491d-23c2-43c7-bbd2-c1411e9e1bbf" (UID: "9847491d-23c2-43c7-bbd2-c1411e9e1bbf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:45:04 crc kubenswrapper[4832]: I0312 15:45:04.089621 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9847491d-23c2-43c7-bbd2-c1411e9e1bbf-kube-api-access-d7rff" (OuterVolumeSpecName: "kube-api-access-d7rff") pod "9847491d-23c2-43c7-bbd2-c1411e9e1bbf" (UID: "9847491d-23c2-43c7-bbd2-c1411e9e1bbf"). InnerVolumeSpecName "kube-api-access-d7rff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:45:04 crc kubenswrapper[4832]: I0312 15:45:04.174329 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7rff\" (UniqueName: \"kubernetes.io/projected/9847491d-23c2-43c7-bbd2-c1411e9e1bbf-kube-api-access-d7rff\") on node \"crc\" DevicePath \"\"" Mar 12 15:45:04 crc kubenswrapper[4832]: I0312 15:45:04.174686 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9847491d-23c2-43c7-bbd2-c1411e9e1bbf-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 15:45:04 crc kubenswrapper[4832]: I0312 15:45:04.174808 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9847491d-23c2-43c7-bbd2-c1411e9e1bbf-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 15:45:04 crc kubenswrapper[4832]: I0312 15:45:04.436752 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr" event={"ID":"9847491d-23c2-43c7-bbd2-c1411e9e1bbf","Type":"ContainerDied","Data":"2ebe5d843a43b26d3fd2f2d561301654e72022abbe8c2c009035d77601e20ca2"} Mar 12 15:45:04 crc kubenswrapper[4832]: I0312 15:45:04.436788 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ebe5d843a43b26d3fd2f2d561301654e72022abbe8c2c009035d77601e20ca2" Mar 12 15:45:04 crc kubenswrapper[4832]: I0312 15:45:04.436839 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555505-csbmr" Mar 12 15:45:04 crc kubenswrapper[4832]: I0312 15:45:04.501379 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9"] Mar 12 15:45:04 crc kubenswrapper[4832]: I0312 15:45:04.509379 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555460-49rh9"] Mar 12 15:45:04 crc kubenswrapper[4832]: I0312 15:45:04.637641 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="683ab2e5-03a0-46dd-87df-0d785d36f2d1" path="/var/lib/kubelet/pods/683ab2e5-03a0-46dd-87df-0d785d36f2d1/volumes" Mar 12 15:45:16 crc kubenswrapper[4832]: I0312 15:45:16.315066 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rqgtd/must-gather-2wnvd"] Mar 12 15:45:16 crc kubenswrapper[4832]: E0312 15:45:16.315999 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9847491d-23c2-43c7-bbd2-c1411e9e1bbf" containerName="collect-profiles" Mar 12 15:45:16 crc kubenswrapper[4832]: I0312 15:45:16.316014 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9847491d-23c2-43c7-bbd2-c1411e9e1bbf" containerName="collect-profiles" Mar 12 15:45:16 crc kubenswrapper[4832]: I0312 15:45:16.316225 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9847491d-23c2-43c7-bbd2-c1411e9e1bbf" containerName="collect-profiles" Mar 12 15:45:16 crc kubenswrapper[4832]: I0312 15:45:16.317387 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rqgtd/must-gather-2wnvd" Mar 12 15:45:16 crc kubenswrapper[4832]: I0312 15:45:16.332543 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rqgtd/must-gather-2wnvd"] Mar 12 15:45:16 crc kubenswrapper[4832]: I0312 15:45:16.337986 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rqgtd"/"kube-root-ca.crt" Mar 12 15:45:16 crc kubenswrapper[4832]: I0312 15:45:16.338776 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rqgtd"/"openshift-service-ca.crt" Mar 12 15:45:16 crc kubenswrapper[4832]: I0312 15:45:16.453981 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/43db85df-6864-49e9-9b60-eb6050c1cb01-must-gather-output\") pod \"must-gather-2wnvd\" (UID: \"43db85df-6864-49e9-9b60-eb6050c1cb01\") " pod="openshift-must-gather-rqgtd/must-gather-2wnvd" Mar 12 15:45:16 crc kubenswrapper[4832]: I0312 15:45:16.454382 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5tfq\" (UniqueName: \"kubernetes.io/projected/43db85df-6864-49e9-9b60-eb6050c1cb01-kube-api-access-k5tfq\") pod \"must-gather-2wnvd\" (UID: \"43db85df-6864-49e9-9b60-eb6050c1cb01\") " pod="openshift-must-gather-rqgtd/must-gather-2wnvd" Mar 12 15:45:16 crc kubenswrapper[4832]: I0312 15:45:16.556165 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5tfq\" (UniqueName: \"kubernetes.io/projected/43db85df-6864-49e9-9b60-eb6050c1cb01-kube-api-access-k5tfq\") pod \"must-gather-2wnvd\" (UID: \"43db85df-6864-49e9-9b60-eb6050c1cb01\") " pod="openshift-must-gather-rqgtd/must-gather-2wnvd" Mar 12 15:45:16 crc kubenswrapper[4832]: I0312 15:45:16.556318 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/43db85df-6864-49e9-9b60-eb6050c1cb01-must-gather-output\") pod \"must-gather-2wnvd\" (UID: \"43db85df-6864-49e9-9b60-eb6050c1cb01\") " pod="openshift-must-gather-rqgtd/must-gather-2wnvd" Mar 12 15:45:16 crc kubenswrapper[4832]: I0312 15:45:16.556933 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/43db85df-6864-49e9-9b60-eb6050c1cb01-must-gather-output\") pod \"must-gather-2wnvd\" (UID: \"43db85df-6864-49e9-9b60-eb6050c1cb01\") " pod="openshift-must-gather-rqgtd/must-gather-2wnvd" Mar 12 15:45:16 crc kubenswrapper[4832]: I0312 15:45:16.575674 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5tfq\" (UniqueName: \"kubernetes.io/projected/43db85df-6864-49e9-9b60-eb6050c1cb01-kube-api-access-k5tfq\") pod \"must-gather-2wnvd\" (UID: \"43db85df-6864-49e9-9b60-eb6050c1cb01\") " pod="openshift-must-gather-rqgtd/must-gather-2wnvd" Mar 12 15:45:16 crc kubenswrapper[4832]: I0312 15:45:16.643198 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rqgtd/must-gather-2wnvd" Mar 12 15:45:17 crc kubenswrapper[4832]: I0312 15:45:17.176912 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rqgtd/must-gather-2wnvd"] Mar 12 15:45:17 crc kubenswrapper[4832]: W0312 15:45:17.179148 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43db85df_6864_49e9_9b60_eb6050c1cb01.slice/crio-21e7a37bff2a36f91298c7b6351cb13e210b81bb89a055169416424959d68add WatchSource:0}: Error finding container 21e7a37bff2a36f91298c7b6351cb13e210b81bb89a055169416424959d68add: Status 404 returned error can't find the container with id 21e7a37bff2a36f91298c7b6351cb13e210b81bb89a055169416424959d68add Mar 12 15:45:17 crc kubenswrapper[4832]: I0312 15:45:17.584264 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rqgtd/must-gather-2wnvd" event={"ID":"43db85df-6864-49e9-9b60-eb6050c1cb01","Type":"ContainerStarted","Data":"21e7a37bff2a36f91298c7b6351cb13e210b81bb89a055169416424959d68add"} Mar 12 15:45:23 crc kubenswrapper[4832]: I0312 15:45:23.644413 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rqgtd/must-gather-2wnvd" event={"ID":"43db85df-6864-49e9-9b60-eb6050c1cb01","Type":"ContainerStarted","Data":"114d78f404e7f547b4b8cb9eb38dec53f5379b736ab1410a1128490926d7a7a8"} Mar 12 15:45:23 crc kubenswrapper[4832]: I0312 15:45:23.645105 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rqgtd/must-gather-2wnvd" event={"ID":"43db85df-6864-49e9-9b60-eb6050c1cb01","Type":"ContainerStarted","Data":"c3aa646b371773e2fac4277a42c16631403b65634c46e6bde4bc308e1a92b090"} Mar 12 15:45:23 crc kubenswrapper[4832]: I0312 15:45:23.662071 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rqgtd/must-gather-2wnvd" podStartSLOduration=1.81188477 
podStartE2EDuration="7.662055171s" podCreationTimestamp="2026-03-12 15:45:16 +0000 UTC" firstStartedPulling="2026-03-12 15:45:17.181620944 +0000 UTC m=+3475.825635180" lastFinishedPulling="2026-03-12 15:45:23.031791365 +0000 UTC m=+3481.675805581" observedRunningTime="2026-03-12 15:45:23.659264792 +0000 UTC m=+3482.303279028" watchObservedRunningTime="2026-03-12 15:45:23.662055171 +0000 UTC m=+3482.306069397" Mar 12 15:45:26 crc kubenswrapper[4832]: I0312 15:45:26.314257 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:45:26 crc kubenswrapper[4832]: I0312 15:45:26.314764 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:45:26 crc kubenswrapper[4832]: I0312 15:45:26.314804 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 15:45:26 crc kubenswrapper[4832]: I0312 15:45:26.315492 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c57499912bbde1c5ad6ba1ac6bbb5cfde685e2cc87fb12523a97e99c667c390"} pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 15:45:26 crc kubenswrapper[4832]: I0312 15:45:26.315565 4832 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" containerID="cri-o://9c57499912bbde1c5ad6ba1ac6bbb5cfde685e2cc87fb12523a97e99c667c390" gracePeriod=600 Mar 12 15:45:26 crc kubenswrapper[4832]: E0312 15:45:26.401382 4832 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.227:53700->38.102.83.227:34635: write tcp 38.102.83.227:53700->38.102.83.227:34635: write: broken pipe Mar 12 15:45:26 crc kubenswrapper[4832]: I0312 15:45:26.675825 4832 generic.go:334] "Generic (PLEG): container finished" podID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerID="9c57499912bbde1c5ad6ba1ac6bbb5cfde685e2cc87fb12523a97e99c667c390" exitCode=0 Mar 12 15:45:26 crc kubenswrapper[4832]: I0312 15:45:26.675908 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerDied","Data":"9c57499912bbde1c5ad6ba1ac6bbb5cfde685e2cc87fb12523a97e99c667c390"} Mar 12 15:45:26 crc kubenswrapper[4832]: I0312 15:45:26.676203 4832 scope.go:117] "RemoveContainer" containerID="91a6835697ae1b5f51cf7551485f9f890bf709f9bbf936ab34fb86e6ed37ae05" Mar 12 15:45:27 crc kubenswrapper[4832]: I0312 15:45:27.234024 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rqgtd/crc-debug-g94wx"] Mar 12 15:45:27 crc kubenswrapper[4832]: I0312 15:45:27.235086 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rqgtd/crc-debug-g94wx" Mar 12 15:45:27 crc kubenswrapper[4832]: I0312 15:45:27.237902 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rqgtd"/"default-dockercfg-zcgnj" Mar 12 15:45:27 crc kubenswrapper[4832]: I0312 15:45:27.385691 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9dc3f33-b141-4399-ab60-76a3df3083c0-host\") pod \"crc-debug-g94wx\" (UID: \"c9dc3f33-b141-4399-ab60-76a3df3083c0\") " pod="openshift-must-gather-rqgtd/crc-debug-g94wx" Mar 12 15:45:27 crc kubenswrapper[4832]: I0312 15:45:27.385777 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xfmk\" (UniqueName: \"kubernetes.io/projected/c9dc3f33-b141-4399-ab60-76a3df3083c0-kube-api-access-5xfmk\") pod \"crc-debug-g94wx\" (UID: \"c9dc3f33-b141-4399-ab60-76a3df3083c0\") " pod="openshift-must-gather-rqgtd/crc-debug-g94wx" Mar 12 15:45:27 crc kubenswrapper[4832]: I0312 15:45:27.487067 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9dc3f33-b141-4399-ab60-76a3df3083c0-host\") pod \"crc-debug-g94wx\" (UID: \"c9dc3f33-b141-4399-ab60-76a3df3083c0\") " pod="openshift-must-gather-rqgtd/crc-debug-g94wx" Mar 12 15:45:27 crc kubenswrapper[4832]: I0312 15:45:27.487144 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xfmk\" (UniqueName: \"kubernetes.io/projected/c9dc3f33-b141-4399-ab60-76a3df3083c0-kube-api-access-5xfmk\") pod \"crc-debug-g94wx\" (UID: \"c9dc3f33-b141-4399-ab60-76a3df3083c0\") " pod="openshift-must-gather-rqgtd/crc-debug-g94wx" Mar 12 15:45:27 crc kubenswrapper[4832]: I0312 15:45:27.487220 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/c9dc3f33-b141-4399-ab60-76a3df3083c0-host\") pod \"crc-debug-g94wx\" (UID: \"c9dc3f33-b141-4399-ab60-76a3df3083c0\") " pod="openshift-must-gather-rqgtd/crc-debug-g94wx" Mar 12 15:45:27 crc kubenswrapper[4832]: I0312 15:45:27.507081 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xfmk\" (UniqueName: \"kubernetes.io/projected/c9dc3f33-b141-4399-ab60-76a3df3083c0-kube-api-access-5xfmk\") pod \"crc-debug-g94wx\" (UID: \"c9dc3f33-b141-4399-ab60-76a3df3083c0\") " pod="openshift-must-gather-rqgtd/crc-debug-g94wx" Mar 12 15:45:27 crc kubenswrapper[4832]: I0312 15:45:27.551005 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rqgtd/crc-debug-g94wx" Mar 12 15:45:27 crc kubenswrapper[4832]: I0312 15:45:27.685635 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerStarted","Data":"7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85"} Mar 12 15:45:27 crc kubenswrapper[4832]: I0312 15:45:27.686687 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rqgtd/crc-debug-g94wx" event={"ID":"c9dc3f33-b141-4399-ab60-76a3df3083c0","Type":"ContainerStarted","Data":"9dd14b530558b3192fc2cbd8eb6e0744dda627ae8e19071c444fabe0ebc48611"} Mar 12 15:45:39 crc kubenswrapper[4832]: I0312 15:45:39.793365 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rqgtd/crc-debug-g94wx" event={"ID":"c9dc3f33-b141-4399-ab60-76a3df3083c0","Type":"ContainerStarted","Data":"3bb162359f32ab8a31e4d75fc1f5d18364474959900276a5b69783a44bcbf32c"} Mar 12 15:45:39 crc kubenswrapper[4832]: I0312 15:45:39.812206 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rqgtd/crc-debug-g94wx" podStartSLOduration=1.714684713 
podStartE2EDuration="12.812186832s" podCreationTimestamp="2026-03-12 15:45:27 +0000 UTC" firstStartedPulling="2026-03-12 15:45:27.583245624 +0000 UTC m=+3486.227259840" lastFinishedPulling="2026-03-12 15:45:38.680747733 +0000 UTC m=+3497.324761959" observedRunningTime="2026-03-12 15:45:39.808765014 +0000 UTC m=+3498.452779240" watchObservedRunningTime="2026-03-12 15:45:39.812186832 +0000 UTC m=+3498.456201068" Mar 12 15:45:52 crc kubenswrapper[4832]: I0312 15:45:52.699577 4832 scope.go:117] "RemoveContainer" containerID="ddb538c08c232b60e0e7d25c4bb64889f4d9a4045306cea03339db95d4b37a2a" Mar 12 15:46:00 crc kubenswrapper[4832]: I0312 15:46:00.140078 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555506-x6jxr"] Mar 12 15:46:00 crc kubenswrapper[4832]: I0312 15:46:00.145924 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555506-x6jxr" Mar 12 15:46:00 crc kubenswrapper[4832]: I0312 15:46:00.154100 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:46:00 crc kubenswrapper[4832]: I0312 15:46:00.154370 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:46:00 crc kubenswrapper[4832]: I0312 15:46:00.154694 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:46:00 crc kubenswrapper[4832]: I0312 15:46:00.159234 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555506-x6jxr"] Mar 12 15:46:00 crc kubenswrapper[4832]: I0312 15:46:00.220707 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gqq7\" (UniqueName: \"kubernetes.io/projected/ca28e44e-2492-499c-a91f-f48e4283db6d-kube-api-access-9gqq7\") pod \"auto-csr-approver-29555506-x6jxr\" (UID: 
\"ca28e44e-2492-499c-a91f-f48e4283db6d\") " pod="openshift-infra/auto-csr-approver-29555506-x6jxr" Mar 12 15:46:00 crc kubenswrapper[4832]: I0312 15:46:00.323189 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gqq7\" (UniqueName: \"kubernetes.io/projected/ca28e44e-2492-499c-a91f-f48e4283db6d-kube-api-access-9gqq7\") pod \"auto-csr-approver-29555506-x6jxr\" (UID: \"ca28e44e-2492-499c-a91f-f48e4283db6d\") " pod="openshift-infra/auto-csr-approver-29555506-x6jxr" Mar 12 15:46:00 crc kubenswrapper[4832]: I0312 15:46:00.353309 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gqq7\" (UniqueName: \"kubernetes.io/projected/ca28e44e-2492-499c-a91f-f48e4283db6d-kube-api-access-9gqq7\") pod \"auto-csr-approver-29555506-x6jxr\" (UID: \"ca28e44e-2492-499c-a91f-f48e4283db6d\") " pod="openshift-infra/auto-csr-approver-29555506-x6jxr" Mar 12 15:46:00 crc kubenswrapper[4832]: I0312 15:46:00.478525 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555506-x6jxr" Mar 12 15:46:00 crc kubenswrapper[4832]: I0312 15:46:00.951391 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555506-x6jxr"] Mar 12 15:46:00 crc kubenswrapper[4832]: W0312 15:46:00.960060 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca28e44e_2492_499c_a91f_f48e4283db6d.slice/crio-bceb083b30d2fa21f21f4f416d0c1716c2348b62306d34a6f61b6db3e432898b WatchSource:0}: Error finding container bceb083b30d2fa21f21f4f416d0c1716c2348b62306d34a6f61b6db3e432898b: Status 404 returned error can't find the container with id bceb083b30d2fa21f21f4f416d0c1716c2348b62306d34a6f61b6db3e432898b Mar 12 15:46:00 crc kubenswrapper[4832]: I0312 15:46:00.984848 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555506-x6jxr" event={"ID":"ca28e44e-2492-499c-a91f-f48e4283db6d","Type":"ContainerStarted","Data":"bceb083b30d2fa21f21f4f416d0c1716c2348b62306d34a6f61b6db3e432898b"} Mar 12 15:46:03 crc kubenswrapper[4832]: I0312 15:46:03.005719 4832 generic.go:334] "Generic (PLEG): container finished" podID="ca28e44e-2492-499c-a91f-f48e4283db6d" containerID="89667e7bd610354763bc6e5758e2ad4743e2988b706dfa10a9e57613d486693e" exitCode=0 Mar 12 15:46:03 crc kubenswrapper[4832]: I0312 15:46:03.005832 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555506-x6jxr" event={"ID":"ca28e44e-2492-499c-a91f-f48e4283db6d","Type":"ContainerDied","Data":"89667e7bd610354763bc6e5758e2ad4743e2988b706dfa10a9e57613d486693e"} Mar 12 15:46:04 crc kubenswrapper[4832]: I0312 15:46:04.352309 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555506-x6jxr" Mar 12 15:46:04 crc kubenswrapper[4832]: I0312 15:46:04.397425 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gqq7\" (UniqueName: \"kubernetes.io/projected/ca28e44e-2492-499c-a91f-f48e4283db6d-kube-api-access-9gqq7\") pod \"ca28e44e-2492-499c-a91f-f48e4283db6d\" (UID: \"ca28e44e-2492-499c-a91f-f48e4283db6d\") " Mar 12 15:46:04 crc kubenswrapper[4832]: I0312 15:46:04.406887 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca28e44e-2492-499c-a91f-f48e4283db6d-kube-api-access-9gqq7" (OuterVolumeSpecName: "kube-api-access-9gqq7") pod "ca28e44e-2492-499c-a91f-f48e4283db6d" (UID: "ca28e44e-2492-499c-a91f-f48e4283db6d"). InnerVolumeSpecName "kube-api-access-9gqq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:46:04 crc kubenswrapper[4832]: I0312 15:46:04.499621 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gqq7\" (UniqueName: \"kubernetes.io/projected/ca28e44e-2492-499c-a91f-f48e4283db6d-kube-api-access-9gqq7\") on node \"crc\" DevicePath \"\"" Mar 12 15:46:05 crc kubenswrapper[4832]: I0312 15:46:05.025073 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555506-x6jxr" event={"ID":"ca28e44e-2492-499c-a91f-f48e4283db6d","Type":"ContainerDied","Data":"bceb083b30d2fa21f21f4f416d0c1716c2348b62306d34a6f61b6db3e432898b"} Mar 12 15:46:05 crc kubenswrapper[4832]: I0312 15:46:05.025105 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555506-x6jxr" Mar 12 15:46:05 crc kubenswrapper[4832]: I0312 15:46:05.025119 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bceb083b30d2fa21f21f4f416d0c1716c2348b62306d34a6f61b6db3e432898b" Mar 12 15:46:05 crc kubenswrapper[4832]: I0312 15:46:05.429901 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555500-9xmr5"] Mar 12 15:46:05 crc kubenswrapper[4832]: I0312 15:46:05.441643 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555500-9xmr5"] Mar 12 15:46:06 crc kubenswrapper[4832]: I0312 15:46:06.633604 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ebf363f-19b6-4ef3-bab3-231d3929f7fb" path="/var/lib/kubelet/pods/4ebf363f-19b6-4ef3-bab3-231d3929f7fb/volumes" Mar 12 15:46:14 crc kubenswrapper[4832]: I0312 15:46:14.719098 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xnbjm"] Mar 12 15:46:14 crc kubenswrapper[4832]: E0312 15:46:14.720029 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca28e44e-2492-499c-a91f-f48e4283db6d" containerName="oc" Mar 12 15:46:14 crc kubenswrapper[4832]: I0312 15:46:14.720042 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca28e44e-2492-499c-a91f-f48e4283db6d" containerName="oc" Mar 12 15:46:14 crc kubenswrapper[4832]: I0312 15:46:14.720246 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca28e44e-2492-499c-a91f-f48e4283db6d" containerName="oc" Mar 12 15:46:14 crc kubenswrapper[4832]: I0312 15:46:14.721608 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xnbjm" Mar 12 15:46:14 crc kubenswrapper[4832]: I0312 15:46:14.737067 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xnbjm"] Mar 12 15:46:14 crc kubenswrapper[4832]: I0312 15:46:14.825003 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6cdd\" (UniqueName: \"kubernetes.io/projected/e9a17575-0ffe-4286-bc78-7b4370a06d43-kube-api-access-w6cdd\") pod \"redhat-operators-xnbjm\" (UID: \"e9a17575-0ffe-4286-bc78-7b4370a06d43\") " pod="openshift-marketplace/redhat-operators-xnbjm" Mar 12 15:46:14 crc kubenswrapper[4832]: I0312 15:46:14.825098 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a17575-0ffe-4286-bc78-7b4370a06d43-utilities\") pod \"redhat-operators-xnbjm\" (UID: \"e9a17575-0ffe-4286-bc78-7b4370a06d43\") " pod="openshift-marketplace/redhat-operators-xnbjm" Mar 12 15:46:14 crc kubenswrapper[4832]: I0312 15:46:14.825126 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a17575-0ffe-4286-bc78-7b4370a06d43-catalog-content\") pod \"redhat-operators-xnbjm\" (UID: \"e9a17575-0ffe-4286-bc78-7b4370a06d43\") " pod="openshift-marketplace/redhat-operators-xnbjm" Mar 12 15:46:14 crc kubenswrapper[4832]: I0312 15:46:14.927044 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6cdd\" (UniqueName: \"kubernetes.io/projected/e9a17575-0ffe-4286-bc78-7b4370a06d43-kube-api-access-w6cdd\") pod \"redhat-operators-xnbjm\" (UID: \"e9a17575-0ffe-4286-bc78-7b4370a06d43\") " pod="openshift-marketplace/redhat-operators-xnbjm" Mar 12 15:46:14 crc kubenswrapper[4832]: I0312 15:46:14.927119 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a17575-0ffe-4286-bc78-7b4370a06d43-utilities\") pod \"redhat-operators-xnbjm\" (UID: \"e9a17575-0ffe-4286-bc78-7b4370a06d43\") " pod="openshift-marketplace/redhat-operators-xnbjm" Mar 12 15:46:14 crc kubenswrapper[4832]: I0312 15:46:14.927142 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a17575-0ffe-4286-bc78-7b4370a06d43-catalog-content\") pod \"redhat-operators-xnbjm\" (UID: \"e9a17575-0ffe-4286-bc78-7b4370a06d43\") " pod="openshift-marketplace/redhat-operators-xnbjm" Mar 12 15:46:14 crc kubenswrapper[4832]: I0312 15:46:14.927621 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a17575-0ffe-4286-bc78-7b4370a06d43-catalog-content\") pod \"redhat-operators-xnbjm\" (UID: \"e9a17575-0ffe-4286-bc78-7b4370a06d43\") " pod="openshift-marketplace/redhat-operators-xnbjm" Mar 12 15:46:14 crc kubenswrapper[4832]: I0312 15:46:14.928417 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a17575-0ffe-4286-bc78-7b4370a06d43-utilities\") pod \"redhat-operators-xnbjm\" (UID: \"e9a17575-0ffe-4286-bc78-7b4370a06d43\") " pod="openshift-marketplace/redhat-operators-xnbjm" Mar 12 15:46:14 crc kubenswrapper[4832]: I0312 15:46:14.951592 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6cdd\" (UniqueName: \"kubernetes.io/projected/e9a17575-0ffe-4286-bc78-7b4370a06d43-kube-api-access-w6cdd\") pod \"redhat-operators-xnbjm\" (UID: \"e9a17575-0ffe-4286-bc78-7b4370a06d43\") " pod="openshift-marketplace/redhat-operators-xnbjm" Mar 12 15:46:15 crc kubenswrapper[4832]: I0312 15:46:15.050107 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xnbjm" Mar 12 15:46:15 crc kubenswrapper[4832]: I0312 15:46:15.549852 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xnbjm"] Mar 12 15:46:16 crc kubenswrapper[4832]: I0312 15:46:16.320003 4832 generic.go:334] "Generic (PLEG): container finished" podID="e9a17575-0ffe-4286-bc78-7b4370a06d43" containerID="1f4c1bf94b295a3cfd5a844984f43ac5466771057ee6b6ce982e940080b2f10d" exitCode=0 Mar 12 15:46:16 crc kubenswrapper[4832]: I0312 15:46:16.320104 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xnbjm" event={"ID":"e9a17575-0ffe-4286-bc78-7b4370a06d43","Type":"ContainerDied","Data":"1f4c1bf94b295a3cfd5a844984f43ac5466771057ee6b6ce982e940080b2f10d"} Mar 12 15:46:16 crc kubenswrapper[4832]: I0312 15:46:16.320348 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xnbjm" event={"ID":"e9a17575-0ffe-4286-bc78-7b4370a06d43","Type":"ContainerStarted","Data":"6f2727cb6adcccffa485f8f8fe48bb811f42d7a1a32fcd2418bbf6ebf6617436"} Mar 12 15:46:16 crc kubenswrapper[4832]: I0312 15:46:16.912703 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h4lmf"] Mar 12 15:46:16 crc kubenswrapper[4832]: I0312 15:46:16.918841 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h4lmf" Mar 12 15:46:16 crc kubenswrapper[4832]: I0312 15:46:16.934151 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h4lmf"] Mar 12 15:46:17 crc kubenswrapper[4832]: I0312 15:46:17.071023 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b694af35-a98e-4b41-a10e-88995b1398a1-catalog-content\") pod \"certified-operators-h4lmf\" (UID: \"b694af35-a98e-4b41-a10e-88995b1398a1\") " pod="openshift-marketplace/certified-operators-h4lmf" Mar 12 15:46:17 crc kubenswrapper[4832]: I0312 15:46:17.071544 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b694af35-a98e-4b41-a10e-88995b1398a1-utilities\") pod \"certified-operators-h4lmf\" (UID: \"b694af35-a98e-4b41-a10e-88995b1398a1\") " pod="openshift-marketplace/certified-operators-h4lmf" Mar 12 15:46:17 crc kubenswrapper[4832]: I0312 15:46:17.071651 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g7g5\" (UniqueName: \"kubernetes.io/projected/b694af35-a98e-4b41-a10e-88995b1398a1-kube-api-access-9g7g5\") pod \"certified-operators-h4lmf\" (UID: \"b694af35-a98e-4b41-a10e-88995b1398a1\") " pod="openshift-marketplace/certified-operators-h4lmf" Mar 12 15:46:17 crc kubenswrapper[4832]: I0312 15:46:17.173381 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g7g5\" (UniqueName: \"kubernetes.io/projected/b694af35-a98e-4b41-a10e-88995b1398a1-kube-api-access-9g7g5\") pod \"certified-operators-h4lmf\" (UID: \"b694af35-a98e-4b41-a10e-88995b1398a1\") " pod="openshift-marketplace/certified-operators-h4lmf" Mar 12 15:46:17 crc kubenswrapper[4832]: I0312 15:46:17.173441 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b694af35-a98e-4b41-a10e-88995b1398a1-catalog-content\") pod \"certified-operators-h4lmf\" (UID: \"b694af35-a98e-4b41-a10e-88995b1398a1\") " pod="openshift-marketplace/certified-operators-h4lmf" Mar 12 15:46:17 crc kubenswrapper[4832]: I0312 15:46:17.173529 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b694af35-a98e-4b41-a10e-88995b1398a1-utilities\") pod \"certified-operators-h4lmf\" (UID: \"b694af35-a98e-4b41-a10e-88995b1398a1\") " pod="openshift-marketplace/certified-operators-h4lmf" Mar 12 15:46:17 crc kubenswrapper[4832]: I0312 15:46:17.173980 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b694af35-a98e-4b41-a10e-88995b1398a1-utilities\") pod \"certified-operators-h4lmf\" (UID: \"b694af35-a98e-4b41-a10e-88995b1398a1\") " pod="openshift-marketplace/certified-operators-h4lmf" Mar 12 15:46:17 crc kubenswrapper[4832]: I0312 15:46:17.174471 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b694af35-a98e-4b41-a10e-88995b1398a1-catalog-content\") pod \"certified-operators-h4lmf\" (UID: \"b694af35-a98e-4b41-a10e-88995b1398a1\") " pod="openshift-marketplace/certified-operators-h4lmf" Mar 12 15:46:17 crc kubenswrapper[4832]: I0312 15:46:17.194585 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g7g5\" (UniqueName: \"kubernetes.io/projected/b694af35-a98e-4b41-a10e-88995b1398a1-kube-api-access-9g7g5\") pod \"certified-operators-h4lmf\" (UID: \"b694af35-a98e-4b41-a10e-88995b1398a1\") " pod="openshift-marketplace/certified-operators-h4lmf" Mar 12 15:46:17 crc kubenswrapper[4832]: I0312 15:46:17.302470 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h4lmf" Mar 12 15:46:17 crc kubenswrapper[4832]: I0312 15:46:17.618273 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h4lmf"] Mar 12 15:46:18 crc kubenswrapper[4832]: I0312 15:46:18.336994 4832 generic.go:334] "Generic (PLEG): container finished" podID="c9dc3f33-b141-4399-ab60-76a3df3083c0" containerID="3bb162359f32ab8a31e4d75fc1f5d18364474959900276a5b69783a44bcbf32c" exitCode=0 Mar 12 15:46:18 crc kubenswrapper[4832]: I0312 15:46:18.337078 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rqgtd/crc-debug-g94wx" event={"ID":"c9dc3f33-b141-4399-ab60-76a3df3083c0","Type":"ContainerDied","Data":"3bb162359f32ab8a31e4d75fc1f5d18364474959900276a5b69783a44bcbf32c"} Mar 12 15:46:18 crc kubenswrapper[4832]: I0312 15:46:18.339418 4832 generic.go:334] "Generic (PLEG): container finished" podID="b694af35-a98e-4b41-a10e-88995b1398a1" containerID="e46730f7c2e8560f73250e26cc8d05e340d14efb6eaf7c5884696b738f0a28bb" exitCode=0 Mar 12 15:46:18 crc kubenswrapper[4832]: I0312 15:46:18.339485 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4lmf" event={"ID":"b694af35-a98e-4b41-a10e-88995b1398a1","Type":"ContainerDied","Data":"e46730f7c2e8560f73250e26cc8d05e340d14efb6eaf7c5884696b738f0a28bb"} Mar 12 15:46:18 crc kubenswrapper[4832]: I0312 15:46:18.339544 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4lmf" event={"ID":"b694af35-a98e-4b41-a10e-88995b1398a1","Type":"ContainerStarted","Data":"c8d2b966b9b7450469e4c43df744b9dac14c2692ff8f35981768926b15deafb0"} Mar 12 15:46:18 crc kubenswrapper[4832]: I0312 15:46:18.342264 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xnbjm" 
event={"ID":"e9a17575-0ffe-4286-bc78-7b4370a06d43","Type":"ContainerStarted","Data":"2c12d73b23df75a8e0c10972683733b84625a06fc8129c9e00285e7d288024b5"} Mar 12 15:46:19 crc kubenswrapper[4832]: I0312 15:46:19.350828 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4lmf" event={"ID":"b694af35-a98e-4b41-a10e-88995b1398a1","Type":"ContainerStarted","Data":"c0b00ec973d203cdb924b906414dcef38af005c6ccb560e7d498b05bda879d9d"} Mar 12 15:46:19 crc kubenswrapper[4832]: I0312 15:46:19.503658 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rqgtd/crc-debug-g94wx" Mar 12 15:46:19 crc kubenswrapper[4832]: I0312 15:46:19.536695 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rqgtd/crc-debug-g94wx"] Mar 12 15:46:19 crc kubenswrapper[4832]: I0312 15:46:19.544375 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rqgtd/crc-debug-g94wx"] Mar 12 15:46:19 crc kubenswrapper[4832]: I0312 15:46:19.619262 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xfmk\" (UniqueName: \"kubernetes.io/projected/c9dc3f33-b141-4399-ab60-76a3df3083c0-kube-api-access-5xfmk\") pod \"c9dc3f33-b141-4399-ab60-76a3df3083c0\" (UID: \"c9dc3f33-b141-4399-ab60-76a3df3083c0\") " Mar 12 15:46:19 crc kubenswrapper[4832]: I0312 15:46:19.619383 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9dc3f33-b141-4399-ab60-76a3df3083c0-host\") pod \"c9dc3f33-b141-4399-ab60-76a3df3083c0\" (UID: \"c9dc3f33-b141-4399-ab60-76a3df3083c0\") " Mar 12 15:46:19 crc kubenswrapper[4832]: I0312 15:46:19.619558 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9dc3f33-b141-4399-ab60-76a3df3083c0-host" (OuterVolumeSpecName: "host") pod "c9dc3f33-b141-4399-ab60-76a3df3083c0" (UID: 
"c9dc3f33-b141-4399-ab60-76a3df3083c0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:46:19 crc kubenswrapper[4832]: I0312 15:46:19.620093 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9dc3f33-b141-4399-ab60-76a3df3083c0-host\") on node \"crc\" DevicePath \"\"" Mar 12 15:46:19 crc kubenswrapper[4832]: I0312 15:46:19.624548 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9dc3f33-b141-4399-ab60-76a3df3083c0-kube-api-access-5xfmk" (OuterVolumeSpecName: "kube-api-access-5xfmk") pod "c9dc3f33-b141-4399-ab60-76a3df3083c0" (UID: "c9dc3f33-b141-4399-ab60-76a3df3083c0"). InnerVolumeSpecName "kube-api-access-5xfmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:46:19 crc kubenswrapper[4832]: I0312 15:46:19.722467 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xfmk\" (UniqueName: \"kubernetes.io/projected/c9dc3f33-b141-4399-ab60-76a3df3083c0-kube-api-access-5xfmk\") on node \"crc\" DevicePath \"\"" Mar 12 15:46:20 crc kubenswrapper[4832]: I0312 15:46:20.364343 4832 generic.go:334] "Generic (PLEG): container finished" podID="b694af35-a98e-4b41-a10e-88995b1398a1" containerID="c0b00ec973d203cdb924b906414dcef38af005c6ccb560e7d498b05bda879d9d" exitCode=0 Mar 12 15:46:20 crc kubenswrapper[4832]: I0312 15:46:20.364543 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4lmf" event={"ID":"b694af35-a98e-4b41-a10e-88995b1398a1","Type":"ContainerDied","Data":"c0b00ec973d203cdb924b906414dcef38af005c6ccb560e7d498b05bda879d9d"} Mar 12 15:46:20 crc kubenswrapper[4832]: I0312 15:46:20.369533 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dd14b530558b3192fc2cbd8eb6e0744dda627ae8e19071c444fabe0ebc48611" Mar 12 15:46:20 crc kubenswrapper[4832]: I0312 15:46:20.369646 4832 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rqgtd/crc-debug-g94wx" Mar 12 15:46:20 crc kubenswrapper[4832]: I0312 15:46:20.631113 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9dc3f33-b141-4399-ab60-76a3df3083c0" path="/var/lib/kubelet/pods/c9dc3f33-b141-4399-ab60-76a3df3083c0/volumes" Mar 12 15:46:20 crc kubenswrapper[4832]: I0312 15:46:20.736639 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rqgtd/crc-debug-xq4nt"] Mar 12 15:46:20 crc kubenswrapper[4832]: E0312 15:46:20.737058 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9dc3f33-b141-4399-ab60-76a3df3083c0" containerName="container-00" Mar 12 15:46:20 crc kubenswrapper[4832]: I0312 15:46:20.737077 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9dc3f33-b141-4399-ab60-76a3df3083c0" containerName="container-00" Mar 12 15:46:20 crc kubenswrapper[4832]: I0312 15:46:20.737255 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9dc3f33-b141-4399-ab60-76a3df3083c0" containerName="container-00" Mar 12 15:46:20 crc kubenswrapper[4832]: I0312 15:46:20.737916 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rqgtd/crc-debug-xq4nt" Mar 12 15:46:20 crc kubenswrapper[4832]: I0312 15:46:20.740172 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rqgtd"/"default-dockercfg-zcgnj" Mar 12 15:46:20 crc kubenswrapper[4832]: I0312 15:46:20.842374 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsjt9\" (UniqueName: \"kubernetes.io/projected/57eaff83-583b-4a27-882b-d16733a6e4d6-kube-api-access-zsjt9\") pod \"crc-debug-xq4nt\" (UID: \"57eaff83-583b-4a27-882b-d16733a6e4d6\") " pod="openshift-must-gather-rqgtd/crc-debug-xq4nt" Mar 12 15:46:20 crc kubenswrapper[4832]: I0312 15:46:20.842714 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57eaff83-583b-4a27-882b-d16733a6e4d6-host\") pod \"crc-debug-xq4nt\" (UID: \"57eaff83-583b-4a27-882b-d16733a6e4d6\") " pod="openshift-must-gather-rqgtd/crc-debug-xq4nt" Mar 12 15:46:20 crc kubenswrapper[4832]: I0312 15:46:20.944341 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsjt9\" (UniqueName: \"kubernetes.io/projected/57eaff83-583b-4a27-882b-d16733a6e4d6-kube-api-access-zsjt9\") pod \"crc-debug-xq4nt\" (UID: \"57eaff83-583b-4a27-882b-d16733a6e4d6\") " pod="openshift-must-gather-rqgtd/crc-debug-xq4nt" Mar 12 15:46:20 crc kubenswrapper[4832]: I0312 15:46:20.944426 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57eaff83-583b-4a27-882b-d16733a6e4d6-host\") pod \"crc-debug-xq4nt\" (UID: \"57eaff83-583b-4a27-882b-d16733a6e4d6\") " pod="openshift-must-gather-rqgtd/crc-debug-xq4nt" Mar 12 15:46:20 crc kubenswrapper[4832]: I0312 15:46:20.944566 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/57eaff83-583b-4a27-882b-d16733a6e4d6-host\") pod \"crc-debug-xq4nt\" (UID: \"57eaff83-583b-4a27-882b-d16733a6e4d6\") " pod="openshift-must-gather-rqgtd/crc-debug-xq4nt" Mar 12 15:46:20 crc kubenswrapper[4832]: I0312 15:46:20.963531 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsjt9\" (UniqueName: \"kubernetes.io/projected/57eaff83-583b-4a27-882b-d16733a6e4d6-kube-api-access-zsjt9\") pod \"crc-debug-xq4nt\" (UID: \"57eaff83-583b-4a27-882b-d16733a6e4d6\") " pod="openshift-must-gather-rqgtd/crc-debug-xq4nt" Mar 12 15:46:21 crc kubenswrapper[4832]: I0312 15:46:21.056965 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rqgtd/crc-debug-xq4nt" Mar 12 15:46:21 crc kubenswrapper[4832]: I0312 15:46:21.379428 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4lmf" event={"ID":"b694af35-a98e-4b41-a10e-88995b1398a1","Type":"ContainerStarted","Data":"41cde9b30c1f2090484bdc628fedd9bf951f71f30066ddf8fbebd6e643d0acda"} Mar 12 15:46:21 crc kubenswrapper[4832]: I0312 15:46:21.382212 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rqgtd/crc-debug-xq4nt" event={"ID":"57eaff83-583b-4a27-882b-d16733a6e4d6","Type":"ContainerStarted","Data":"d1e917960cb176dcb6a01bfe6be0980dac355a5b4497eddf598573211dd72c85"} Mar 12 15:46:21 crc kubenswrapper[4832]: I0312 15:46:21.382248 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rqgtd/crc-debug-xq4nt" event={"ID":"57eaff83-583b-4a27-882b-d16733a6e4d6","Type":"ContainerStarted","Data":"6c1f44bafe83e901b861d13c1062eca399d03ba66fb8fd214eb7c3357cd74ae9"} Mar 12 15:46:21 crc kubenswrapper[4832]: I0312 15:46:21.399666 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h4lmf" podStartSLOduration=2.879987256 podStartE2EDuration="5.399650832s" 
podCreationTimestamp="2026-03-12 15:46:16 +0000 UTC" firstStartedPulling="2026-03-12 15:46:18.341413625 +0000 UTC m=+3536.985427851" lastFinishedPulling="2026-03-12 15:46:20.861077191 +0000 UTC m=+3539.505091427" observedRunningTime="2026-03-12 15:46:21.397626594 +0000 UTC m=+3540.041640820" watchObservedRunningTime="2026-03-12 15:46:21.399650832 +0000 UTC m=+3540.043665058" Mar 12 15:46:21 crc kubenswrapper[4832]: I0312 15:46:21.419403 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rqgtd/crc-debug-xq4nt" podStartSLOduration=1.419386005 podStartE2EDuration="1.419386005s" podCreationTimestamp="2026-03-12 15:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:46:21.412797387 +0000 UTC m=+3540.056811623" watchObservedRunningTime="2026-03-12 15:46:21.419386005 +0000 UTC m=+3540.063400221" Mar 12 15:46:22 crc kubenswrapper[4832]: I0312 15:46:22.393122 4832 generic.go:334] "Generic (PLEG): container finished" podID="57eaff83-583b-4a27-882b-d16733a6e4d6" containerID="d1e917960cb176dcb6a01bfe6be0980dac355a5b4497eddf598573211dd72c85" exitCode=0 Mar 12 15:46:22 crc kubenswrapper[4832]: I0312 15:46:22.393158 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rqgtd/crc-debug-xq4nt" event={"ID":"57eaff83-583b-4a27-882b-d16733a6e4d6","Type":"ContainerDied","Data":"d1e917960cb176dcb6a01bfe6be0980dac355a5b4497eddf598573211dd72c85"} Mar 12 15:46:23 crc kubenswrapper[4832]: I0312 15:46:23.408052 4832 generic.go:334] "Generic (PLEG): container finished" podID="e9a17575-0ffe-4286-bc78-7b4370a06d43" containerID="2c12d73b23df75a8e0c10972683733b84625a06fc8129c9e00285e7d288024b5" exitCode=0 Mar 12 15:46:23 crc kubenswrapper[4832]: I0312 15:46:23.408225 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xnbjm" 
event={"ID":"e9a17575-0ffe-4286-bc78-7b4370a06d43","Type":"ContainerDied","Data":"2c12d73b23df75a8e0c10972683733b84625a06fc8129c9e00285e7d288024b5"} Mar 12 15:46:23 crc kubenswrapper[4832]: I0312 15:46:23.527170 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rqgtd/crc-debug-xq4nt" Mar 12 15:46:23 crc kubenswrapper[4832]: I0312 15:46:23.569058 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rqgtd/crc-debug-xq4nt"] Mar 12 15:46:23 crc kubenswrapper[4832]: I0312 15:46:23.580164 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rqgtd/crc-debug-xq4nt"] Mar 12 15:46:23 crc kubenswrapper[4832]: I0312 15:46:23.693669 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57eaff83-583b-4a27-882b-d16733a6e4d6-host\") pod \"57eaff83-583b-4a27-882b-d16733a6e4d6\" (UID: \"57eaff83-583b-4a27-882b-d16733a6e4d6\") " Mar 12 15:46:23 crc kubenswrapper[4832]: I0312 15:46:23.693816 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57eaff83-583b-4a27-882b-d16733a6e4d6-host" (OuterVolumeSpecName: "host") pod "57eaff83-583b-4a27-882b-d16733a6e4d6" (UID: "57eaff83-583b-4a27-882b-d16733a6e4d6"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:46:23 crc kubenswrapper[4832]: I0312 15:46:23.693866 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsjt9\" (UniqueName: \"kubernetes.io/projected/57eaff83-583b-4a27-882b-d16733a6e4d6-kube-api-access-zsjt9\") pod \"57eaff83-583b-4a27-882b-d16733a6e4d6\" (UID: \"57eaff83-583b-4a27-882b-d16733a6e4d6\") " Mar 12 15:46:23 crc kubenswrapper[4832]: I0312 15:46:23.694590 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57eaff83-583b-4a27-882b-d16733a6e4d6-host\") on node \"crc\" DevicePath \"\"" Mar 12 15:46:23 crc kubenswrapper[4832]: I0312 15:46:23.704932 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57eaff83-583b-4a27-882b-d16733a6e4d6-kube-api-access-zsjt9" (OuterVolumeSpecName: "kube-api-access-zsjt9") pod "57eaff83-583b-4a27-882b-d16733a6e4d6" (UID: "57eaff83-583b-4a27-882b-d16733a6e4d6"). InnerVolumeSpecName "kube-api-access-zsjt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:46:23 crc kubenswrapper[4832]: I0312 15:46:23.797372 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsjt9\" (UniqueName: \"kubernetes.io/projected/57eaff83-583b-4a27-882b-d16733a6e4d6-kube-api-access-zsjt9\") on node \"crc\" DevicePath \"\"" Mar 12 15:46:24 crc kubenswrapper[4832]: I0312 15:46:24.420066 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xnbjm" event={"ID":"e9a17575-0ffe-4286-bc78-7b4370a06d43","Type":"ContainerStarted","Data":"6bf62af46ac912caa7e4a27344da07070874692237a3f8e1eb00e57de71ffcb7"} Mar 12 15:46:24 crc kubenswrapper[4832]: I0312 15:46:24.422056 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c1f44bafe83e901b861d13c1062eca399d03ba66fb8fd214eb7c3357cd74ae9" Mar 12 15:46:24 crc kubenswrapper[4832]: I0312 15:46:24.422075 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rqgtd/crc-debug-xq4nt" Mar 12 15:46:24 crc kubenswrapper[4832]: I0312 15:46:24.485212 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xnbjm" podStartSLOduration=2.824755944 podStartE2EDuration="10.485192178s" podCreationTimestamp="2026-03-12 15:46:14 +0000 UTC" firstStartedPulling="2026-03-12 15:46:16.322246362 +0000 UTC m=+3534.966260608" lastFinishedPulling="2026-03-12 15:46:23.982682606 +0000 UTC m=+3542.626696842" observedRunningTime="2026-03-12 15:46:24.477290132 +0000 UTC m=+3543.121304358" watchObservedRunningTime="2026-03-12 15:46:24.485192178 +0000 UTC m=+3543.129206414" Mar 12 15:46:24 crc kubenswrapper[4832]: I0312 15:46:24.633762 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57eaff83-583b-4a27-882b-d16733a6e4d6" path="/var/lib/kubelet/pods/57eaff83-583b-4a27-882b-d16733a6e4d6/volumes" Mar 12 15:46:24 crc kubenswrapper[4832]: 
I0312 15:46:24.728841 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rqgtd/crc-debug-5zfxv"] Mar 12 15:46:24 crc kubenswrapper[4832]: E0312 15:46:24.729363 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57eaff83-583b-4a27-882b-d16733a6e4d6" containerName="container-00" Mar 12 15:46:24 crc kubenswrapper[4832]: I0312 15:46:24.729386 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="57eaff83-583b-4a27-882b-d16733a6e4d6" containerName="container-00" Mar 12 15:46:24 crc kubenswrapper[4832]: I0312 15:46:24.729711 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="57eaff83-583b-4a27-882b-d16733a6e4d6" containerName="container-00" Mar 12 15:46:24 crc kubenswrapper[4832]: I0312 15:46:24.730424 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rqgtd/crc-debug-5zfxv" Mar 12 15:46:24 crc kubenswrapper[4832]: I0312 15:46:24.732283 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rqgtd"/"default-dockercfg-zcgnj" Mar 12 15:46:24 crc kubenswrapper[4832]: I0312 15:46:24.821816 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a38e2b7-5b5d-4a79-a0a8-9ffac2682636-host\") pod \"crc-debug-5zfxv\" (UID: \"5a38e2b7-5b5d-4a79-a0a8-9ffac2682636\") " pod="openshift-must-gather-rqgtd/crc-debug-5zfxv" Mar 12 15:46:24 crc kubenswrapper[4832]: I0312 15:46:24.821891 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w94q\" (UniqueName: \"kubernetes.io/projected/5a38e2b7-5b5d-4a79-a0a8-9ffac2682636-kube-api-access-4w94q\") pod \"crc-debug-5zfxv\" (UID: \"5a38e2b7-5b5d-4a79-a0a8-9ffac2682636\") " pod="openshift-must-gather-rqgtd/crc-debug-5zfxv" Mar 12 15:46:24 crc kubenswrapper[4832]: I0312 15:46:24.923428 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a38e2b7-5b5d-4a79-a0a8-9ffac2682636-host\") pod \"crc-debug-5zfxv\" (UID: \"5a38e2b7-5b5d-4a79-a0a8-9ffac2682636\") " pod="openshift-must-gather-rqgtd/crc-debug-5zfxv" Mar 12 15:46:24 crc kubenswrapper[4832]: I0312 15:46:24.923515 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w94q\" (UniqueName: \"kubernetes.io/projected/5a38e2b7-5b5d-4a79-a0a8-9ffac2682636-kube-api-access-4w94q\") pod \"crc-debug-5zfxv\" (UID: \"5a38e2b7-5b5d-4a79-a0a8-9ffac2682636\") " pod="openshift-must-gather-rqgtd/crc-debug-5zfxv" Mar 12 15:46:24 crc kubenswrapper[4832]: I0312 15:46:24.923544 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a38e2b7-5b5d-4a79-a0a8-9ffac2682636-host\") pod \"crc-debug-5zfxv\" (UID: \"5a38e2b7-5b5d-4a79-a0a8-9ffac2682636\") " pod="openshift-must-gather-rqgtd/crc-debug-5zfxv" Mar 12 15:46:24 crc kubenswrapper[4832]: I0312 15:46:24.948054 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w94q\" (UniqueName: \"kubernetes.io/projected/5a38e2b7-5b5d-4a79-a0a8-9ffac2682636-kube-api-access-4w94q\") pod \"crc-debug-5zfxv\" (UID: \"5a38e2b7-5b5d-4a79-a0a8-9ffac2682636\") " pod="openshift-must-gather-rqgtd/crc-debug-5zfxv" Mar 12 15:46:25 crc kubenswrapper[4832]: I0312 15:46:25.051191 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xnbjm" Mar 12 15:46:25 crc kubenswrapper[4832]: I0312 15:46:25.051271 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xnbjm" Mar 12 15:46:25 crc kubenswrapper[4832]: I0312 15:46:25.052998 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rqgtd/crc-debug-5zfxv" Mar 12 15:46:25 crc kubenswrapper[4832]: I0312 15:46:25.434465 4832 generic.go:334] "Generic (PLEG): container finished" podID="5a38e2b7-5b5d-4a79-a0a8-9ffac2682636" containerID="09462f6b5ccb75500473f87f4a1ed65da1eaf49bdd446ef2da1fda73293ca34d" exitCode=0 Mar 12 15:46:25 crc kubenswrapper[4832]: I0312 15:46:25.434556 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rqgtd/crc-debug-5zfxv" event={"ID":"5a38e2b7-5b5d-4a79-a0a8-9ffac2682636","Type":"ContainerDied","Data":"09462f6b5ccb75500473f87f4a1ed65da1eaf49bdd446ef2da1fda73293ca34d"} Mar 12 15:46:25 crc kubenswrapper[4832]: I0312 15:46:25.435587 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rqgtd/crc-debug-5zfxv" event={"ID":"5a38e2b7-5b5d-4a79-a0a8-9ffac2682636","Type":"ContainerStarted","Data":"73008c0ea1a91f8b58c5b05b6a452e93bc4648066f18430be56266ad63f2ee81"} Mar 12 15:46:25 crc kubenswrapper[4832]: I0312 15:46:25.473675 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rqgtd/crc-debug-5zfxv"] Mar 12 15:46:25 crc kubenswrapper[4832]: I0312 15:46:25.484582 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rqgtd/crc-debug-5zfxv"] Mar 12 15:46:26 crc kubenswrapper[4832]: I0312 15:46:26.112580 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xnbjm" podUID="e9a17575-0ffe-4286-bc78-7b4370a06d43" containerName="registry-server" probeResult="failure" output=< Mar 12 15:46:26 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Mar 12 15:46:26 crc kubenswrapper[4832]: > Mar 12 15:46:26 crc kubenswrapper[4832]: I0312 15:46:26.539675 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rqgtd/crc-debug-5zfxv" Mar 12 15:46:26 crc kubenswrapper[4832]: I0312 15:46:26.657538 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a38e2b7-5b5d-4a79-a0a8-9ffac2682636-host\") pod \"5a38e2b7-5b5d-4a79-a0a8-9ffac2682636\" (UID: \"5a38e2b7-5b5d-4a79-a0a8-9ffac2682636\") " Mar 12 15:46:26 crc kubenswrapper[4832]: I0312 15:46:26.657712 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a38e2b7-5b5d-4a79-a0a8-9ffac2682636-host" (OuterVolumeSpecName: "host") pod "5a38e2b7-5b5d-4a79-a0a8-9ffac2682636" (UID: "5a38e2b7-5b5d-4a79-a0a8-9ffac2682636"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:46:26 crc kubenswrapper[4832]: I0312 15:46:26.657911 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w94q\" (UniqueName: \"kubernetes.io/projected/5a38e2b7-5b5d-4a79-a0a8-9ffac2682636-kube-api-access-4w94q\") pod \"5a38e2b7-5b5d-4a79-a0a8-9ffac2682636\" (UID: \"5a38e2b7-5b5d-4a79-a0a8-9ffac2682636\") " Mar 12 15:46:26 crc kubenswrapper[4832]: I0312 15:46:26.658864 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a38e2b7-5b5d-4a79-a0a8-9ffac2682636-host\") on node \"crc\" DevicePath \"\"" Mar 12 15:46:26 crc kubenswrapper[4832]: I0312 15:46:26.667723 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a38e2b7-5b5d-4a79-a0a8-9ffac2682636-kube-api-access-4w94q" (OuterVolumeSpecName: "kube-api-access-4w94q") pod "5a38e2b7-5b5d-4a79-a0a8-9ffac2682636" (UID: "5a38e2b7-5b5d-4a79-a0a8-9ffac2682636"). InnerVolumeSpecName "kube-api-access-4w94q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:46:26 crc kubenswrapper[4832]: I0312 15:46:26.761552 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w94q\" (UniqueName: \"kubernetes.io/projected/5a38e2b7-5b5d-4a79-a0a8-9ffac2682636-kube-api-access-4w94q\") on node \"crc\" DevicePath \"\"" Mar 12 15:46:27 crc kubenswrapper[4832]: I0312 15:46:27.303110 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h4lmf" Mar 12 15:46:27 crc kubenswrapper[4832]: I0312 15:46:27.304982 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h4lmf" Mar 12 15:46:27 crc kubenswrapper[4832]: I0312 15:46:27.451335 4832 scope.go:117] "RemoveContainer" containerID="09462f6b5ccb75500473f87f4a1ed65da1eaf49bdd446ef2da1fda73293ca34d" Mar 12 15:46:27 crc kubenswrapper[4832]: I0312 15:46:27.451471 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rqgtd/crc-debug-5zfxv" Mar 12 15:46:28 crc kubenswrapper[4832]: I0312 15:46:28.376145 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-h4lmf" podUID="b694af35-a98e-4b41-a10e-88995b1398a1" containerName="registry-server" probeResult="failure" output=< Mar 12 15:46:28 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Mar 12 15:46:28 crc kubenswrapper[4832]: > Mar 12 15:46:28 crc kubenswrapper[4832]: I0312 15:46:28.632399 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a38e2b7-5b5d-4a79-a0a8-9ffac2682636" path="/var/lib/kubelet/pods/5a38e2b7-5b5d-4a79-a0a8-9ffac2682636/volumes" Mar 12 15:46:35 crc kubenswrapper[4832]: I0312 15:46:35.096731 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xnbjm" Mar 12 15:46:35 crc kubenswrapper[4832]: I0312 15:46:35.141926 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xnbjm" Mar 12 15:46:35 crc kubenswrapper[4832]: I0312 15:46:35.704540 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xnbjm"] Mar 12 15:46:36 crc kubenswrapper[4832]: I0312 15:46:36.524373 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xnbjm" podUID="e9a17575-0ffe-4286-bc78-7b4370a06d43" containerName="registry-server" containerID="cri-o://6bf62af46ac912caa7e4a27344da07070874692237a3f8e1eb00e57de71ffcb7" gracePeriod=2 Mar 12 15:46:36 crc kubenswrapper[4832]: E0312 15:46:36.719511 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9a17575_0ffe_4286_bc78_7b4370a06d43.slice/crio-6bf62af46ac912caa7e4a27344da07070874692237a3f8e1eb00e57de71ffcb7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9a17575_0ffe_4286_bc78_7b4370a06d43.slice/crio-conmon-6bf62af46ac912caa7e4a27344da07070874692237a3f8e1eb00e57de71ffcb7.scope\": RecentStats: unable to find data in memory cache]" Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.004595 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xnbjm" Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.184882 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a17575-0ffe-4286-bc78-7b4370a06d43-utilities\") pod \"e9a17575-0ffe-4286-bc78-7b4370a06d43\" (UID: \"e9a17575-0ffe-4286-bc78-7b4370a06d43\") " Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.184937 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a17575-0ffe-4286-bc78-7b4370a06d43-catalog-content\") pod \"e9a17575-0ffe-4286-bc78-7b4370a06d43\" (UID: \"e9a17575-0ffe-4286-bc78-7b4370a06d43\") " Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.185038 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6cdd\" (UniqueName: \"kubernetes.io/projected/e9a17575-0ffe-4286-bc78-7b4370a06d43-kube-api-access-w6cdd\") pod \"e9a17575-0ffe-4286-bc78-7b4370a06d43\" (UID: \"e9a17575-0ffe-4286-bc78-7b4370a06d43\") " Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.186009 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9a17575-0ffe-4286-bc78-7b4370a06d43-utilities" (OuterVolumeSpecName: "utilities") 
pod "e9a17575-0ffe-4286-bc78-7b4370a06d43" (UID: "e9a17575-0ffe-4286-bc78-7b4370a06d43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.204718 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9a17575-0ffe-4286-bc78-7b4370a06d43-kube-api-access-w6cdd" (OuterVolumeSpecName: "kube-api-access-w6cdd") pod "e9a17575-0ffe-4286-bc78-7b4370a06d43" (UID: "e9a17575-0ffe-4286-bc78-7b4370a06d43"). InnerVolumeSpecName "kube-api-access-w6cdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.287707 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a17575-0ffe-4286-bc78-7b4370a06d43-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.288011 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6cdd\" (UniqueName: \"kubernetes.io/projected/e9a17575-0ffe-4286-bc78-7b4370a06d43-kube-api-access-w6cdd\") on node \"crc\" DevicePath \"\"" Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.327191 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9a17575-0ffe-4286-bc78-7b4370a06d43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9a17575-0ffe-4286-bc78-7b4370a06d43" (UID: "e9a17575-0ffe-4286-bc78-7b4370a06d43"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.356594 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h4lmf" Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.390259 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a17575-0ffe-4286-bc78-7b4370a06d43-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.402992 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h4lmf" Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.532934 4832 generic.go:334] "Generic (PLEG): container finished" podID="e9a17575-0ffe-4286-bc78-7b4370a06d43" containerID="6bf62af46ac912caa7e4a27344da07070874692237a3f8e1eb00e57de71ffcb7" exitCode=0 Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.533905 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xnbjm" event={"ID":"e9a17575-0ffe-4286-bc78-7b4370a06d43","Type":"ContainerDied","Data":"6bf62af46ac912caa7e4a27344da07070874692237a3f8e1eb00e57de71ffcb7"} Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.533963 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xnbjm" event={"ID":"e9a17575-0ffe-4286-bc78-7b4370a06d43","Type":"ContainerDied","Data":"6f2727cb6adcccffa485f8f8fe48bb811f42d7a1a32fcd2418bbf6ebf6617436"} Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.533986 4832 scope.go:117] "RemoveContainer" containerID="6bf62af46ac912caa7e4a27344da07070874692237a3f8e1eb00e57de71ffcb7" Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.534385 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xnbjm" Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.559134 4832 scope.go:117] "RemoveContainer" containerID="2c12d73b23df75a8e0c10972683733b84625a06fc8129c9e00285e7d288024b5" Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.574328 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xnbjm"] Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.585827 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xnbjm"] Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.599552 4832 scope.go:117] "RemoveContainer" containerID="1f4c1bf94b295a3cfd5a844984f43ac5466771057ee6b6ce982e940080b2f10d" Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.656318 4832 scope.go:117] "RemoveContainer" containerID="6bf62af46ac912caa7e4a27344da07070874692237a3f8e1eb00e57de71ffcb7" Mar 12 15:46:37 crc kubenswrapper[4832]: E0312 15:46:37.656927 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bf62af46ac912caa7e4a27344da07070874692237a3f8e1eb00e57de71ffcb7\": container with ID starting with 6bf62af46ac912caa7e4a27344da07070874692237a3f8e1eb00e57de71ffcb7 not found: ID does not exist" containerID="6bf62af46ac912caa7e4a27344da07070874692237a3f8e1eb00e57de71ffcb7" Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.656956 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bf62af46ac912caa7e4a27344da07070874692237a3f8e1eb00e57de71ffcb7"} err="failed to get container status \"6bf62af46ac912caa7e4a27344da07070874692237a3f8e1eb00e57de71ffcb7\": rpc error: code = NotFound desc = could not find container \"6bf62af46ac912caa7e4a27344da07070874692237a3f8e1eb00e57de71ffcb7\": container with ID starting with 6bf62af46ac912caa7e4a27344da07070874692237a3f8e1eb00e57de71ffcb7 not found: ID does 
not exist" Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.656979 4832 scope.go:117] "RemoveContainer" containerID="2c12d73b23df75a8e0c10972683733b84625a06fc8129c9e00285e7d288024b5" Mar 12 15:46:37 crc kubenswrapper[4832]: E0312 15:46:37.657442 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c12d73b23df75a8e0c10972683733b84625a06fc8129c9e00285e7d288024b5\": container with ID starting with 2c12d73b23df75a8e0c10972683733b84625a06fc8129c9e00285e7d288024b5 not found: ID does not exist" containerID="2c12d73b23df75a8e0c10972683733b84625a06fc8129c9e00285e7d288024b5" Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.657546 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c12d73b23df75a8e0c10972683733b84625a06fc8129c9e00285e7d288024b5"} err="failed to get container status \"2c12d73b23df75a8e0c10972683733b84625a06fc8129c9e00285e7d288024b5\": rpc error: code = NotFound desc = could not find container \"2c12d73b23df75a8e0c10972683733b84625a06fc8129c9e00285e7d288024b5\": container with ID starting with 2c12d73b23df75a8e0c10972683733b84625a06fc8129c9e00285e7d288024b5 not found: ID does not exist" Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.657610 4832 scope.go:117] "RemoveContainer" containerID="1f4c1bf94b295a3cfd5a844984f43ac5466771057ee6b6ce982e940080b2f10d" Mar 12 15:46:37 crc kubenswrapper[4832]: E0312 15:46:37.658000 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f4c1bf94b295a3cfd5a844984f43ac5466771057ee6b6ce982e940080b2f10d\": container with ID starting with 1f4c1bf94b295a3cfd5a844984f43ac5466771057ee6b6ce982e940080b2f10d not found: ID does not exist" containerID="1f4c1bf94b295a3cfd5a844984f43ac5466771057ee6b6ce982e940080b2f10d" Mar 12 15:46:37 crc kubenswrapper[4832]: I0312 15:46:37.658021 4832 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f4c1bf94b295a3cfd5a844984f43ac5466771057ee6b6ce982e940080b2f10d"} err="failed to get container status \"1f4c1bf94b295a3cfd5a844984f43ac5466771057ee6b6ce982e940080b2f10d\": rpc error: code = NotFound desc = could not find container \"1f4c1bf94b295a3cfd5a844984f43ac5466771057ee6b6ce982e940080b2f10d\": container with ID starting with 1f4c1bf94b295a3cfd5a844984f43ac5466771057ee6b6ce982e940080b2f10d not found: ID does not exist" Mar 12 15:46:38 crc kubenswrapper[4832]: I0312 15:46:38.640033 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9a17575-0ffe-4286-bc78-7b4370a06d43" path="/var/lib/kubelet/pods/e9a17575-0ffe-4286-bc78-7b4370a06d43/volumes" Mar 12 15:46:39 crc kubenswrapper[4832]: I0312 15:46:39.713397 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h4lmf"] Mar 12 15:46:39 crc kubenswrapper[4832]: I0312 15:46:39.714106 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h4lmf" podUID="b694af35-a98e-4b41-a10e-88995b1398a1" containerName="registry-server" containerID="cri-o://41cde9b30c1f2090484bdc628fedd9bf951f71f30066ddf8fbebd6e643d0acda" gracePeriod=2 Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.221191 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h4lmf" Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.362163 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b694af35-a98e-4b41-a10e-88995b1398a1-catalog-content\") pod \"b694af35-a98e-4b41-a10e-88995b1398a1\" (UID: \"b694af35-a98e-4b41-a10e-88995b1398a1\") " Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.362422 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b694af35-a98e-4b41-a10e-88995b1398a1-utilities\") pod \"b694af35-a98e-4b41-a10e-88995b1398a1\" (UID: \"b694af35-a98e-4b41-a10e-88995b1398a1\") " Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.362681 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g7g5\" (UniqueName: \"kubernetes.io/projected/b694af35-a98e-4b41-a10e-88995b1398a1-kube-api-access-9g7g5\") pod \"b694af35-a98e-4b41-a10e-88995b1398a1\" (UID: \"b694af35-a98e-4b41-a10e-88995b1398a1\") " Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.363389 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b694af35-a98e-4b41-a10e-88995b1398a1-utilities" (OuterVolumeSpecName: "utilities") pod "b694af35-a98e-4b41-a10e-88995b1398a1" (UID: "b694af35-a98e-4b41-a10e-88995b1398a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.372947 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b694af35-a98e-4b41-a10e-88995b1398a1-kube-api-access-9g7g5" (OuterVolumeSpecName: "kube-api-access-9g7g5") pod "b694af35-a98e-4b41-a10e-88995b1398a1" (UID: "b694af35-a98e-4b41-a10e-88995b1398a1"). InnerVolumeSpecName "kube-api-access-9g7g5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.438691 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b694af35-a98e-4b41-a10e-88995b1398a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b694af35-a98e-4b41-a10e-88995b1398a1" (UID: "b694af35-a98e-4b41-a10e-88995b1398a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.467642 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g7g5\" (UniqueName: \"kubernetes.io/projected/b694af35-a98e-4b41-a10e-88995b1398a1-kube-api-access-9g7g5\") on node \"crc\" DevicePath \"\"" Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.467678 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b694af35-a98e-4b41-a10e-88995b1398a1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.467688 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b694af35-a98e-4b41-a10e-88995b1398a1-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.566533 4832 generic.go:334] "Generic (PLEG): container finished" podID="b694af35-a98e-4b41-a10e-88995b1398a1" containerID="41cde9b30c1f2090484bdc628fedd9bf951f71f30066ddf8fbebd6e643d0acda" exitCode=0 Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.566696 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4lmf" event={"ID":"b694af35-a98e-4b41-a10e-88995b1398a1","Type":"ContainerDied","Data":"41cde9b30c1f2090484bdc628fedd9bf951f71f30066ddf8fbebd6e643d0acda"} Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.567023 4832 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-h4lmf" event={"ID":"b694af35-a98e-4b41-a10e-88995b1398a1","Type":"ContainerDied","Data":"c8d2b966b9b7450469e4c43df744b9dac14c2692ff8f35981768926b15deafb0"} Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.567067 4832 scope.go:117] "RemoveContainer" containerID="41cde9b30c1f2090484bdc628fedd9bf951f71f30066ddf8fbebd6e643d0acda" Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.566777 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h4lmf" Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.595296 4832 scope.go:117] "RemoveContainer" containerID="c0b00ec973d203cdb924b906414dcef38af005c6ccb560e7d498b05bda879d9d" Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.619966 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h4lmf"] Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.620135 4832 scope.go:117] "RemoveContainer" containerID="e46730f7c2e8560f73250e26cc8d05e340d14efb6eaf7c5884696b738f0a28bb" Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.630921 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h4lmf"] Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.675796 4832 scope.go:117] "RemoveContainer" containerID="41cde9b30c1f2090484bdc628fedd9bf951f71f30066ddf8fbebd6e643d0acda" Mar 12 15:46:40 crc kubenswrapper[4832]: E0312 15:46:40.676956 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41cde9b30c1f2090484bdc628fedd9bf951f71f30066ddf8fbebd6e643d0acda\": container with ID starting with 41cde9b30c1f2090484bdc628fedd9bf951f71f30066ddf8fbebd6e643d0acda not found: ID does not exist" containerID="41cde9b30c1f2090484bdc628fedd9bf951f71f30066ddf8fbebd6e643d0acda" Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 
15:46:40.677012 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41cde9b30c1f2090484bdc628fedd9bf951f71f30066ddf8fbebd6e643d0acda"} err="failed to get container status \"41cde9b30c1f2090484bdc628fedd9bf951f71f30066ddf8fbebd6e643d0acda\": rpc error: code = NotFound desc = could not find container \"41cde9b30c1f2090484bdc628fedd9bf951f71f30066ddf8fbebd6e643d0acda\": container with ID starting with 41cde9b30c1f2090484bdc628fedd9bf951f71f30066ddf8fbebd6e643d0acda not found: ID does not exist" Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.677046 4832 scope.go:117] "RemoveContainer" containerID="c0b00ec973d203cdb924b906414dcef38af005c6ccb560e7d498b05bda879d9d" Mar 12 15:46:40 crc kubenswrapper[4832]: E0312 15:46:40.677446 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0b00ec973d203cdb924b906414dcef38af005c6ccb560e7d498b05bda879d9d\": container with ID starting with c0b00ec973d203cdb924b906414dcef38af005c6ccb560e7d498b05bda879d9d not found: ID does not exist" containerID="c0b00ec973d203cdb924b906414dcef38af005c6ccb560e7d498b05bda879d9d" Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.677478 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0b00ec973d203cdb924b906414dcef38af005c6ccb560e7d498b05bda879d9d"} err="failed to get container status \"c0b00ec973d203cdb924b906414dcef38af005c6ccb560e7d498b05bda879d9d\": rpc error: code = NotFound desc = could not find container \"c0b00ec973d203cdb924b906414dcef38af005c6ccb560e7d498b05bda879d9d\": container with ID starting with c0b00ec973d203cdb924b906414dcef38af005c6ccb560e7d498b05bda879d9d not found: ID does not exist" Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.677496 4832 scope.go:117] "RemoveContainer" containerID="e46730f7c2e8560f73250e26cc8d05e340d14efb6eaf7c5884696b738f0a28bb" Mar 12 15:46:40 crc 
kubenswrapper[4832]: E0312 15:46:40.677850 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e46730f7c2e8560f73250e26cc8d05e340d14efb6eaf7c5884696b738f0a28bb\": container with ID starting with e46730f7c2e8560f73250e26cc8d05e340d14efb6eaf7c5884696b738f0a28bb not found: ID does not exist" containerID="e46730f7c2e8560f73250e26cc8d05e340d14efb6eaf7c5884696b738f0a28bb" Mar 12 15:46:40 crc kubenswrapper[4832]: I0312 15:46:40.677883 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e46730f7c2e8560f73250e26cc8d05e340d14efb6eaf7c5884696b738f0a28bb"} err="failed to get container status \"e46730f7c2e8560f73250e26cc8d05e340d14efb6eaf7c5884696b738f0a28bb\": rpc error: code = NotFound desc = could not find container \"e46730f7c2e8560f73250e26cc8d05e340d14efb6eaf7c5884696b738f0a28bb\": container with ID starting with e46730f7c2e8560f73250e26cc8d05e340d14efb6eaf7c5884696b738f0a28bb not found: ID does not exist" Mar 12 15:46:41 crc kubenswrapper[4832]: I0312 15:46:41.580539 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-76485774fd-8dtp8_d751c81a-b91d-4849-a382-81b234d4c6c8/barbican-api/0.log" Mar 12 15:46:41 crc kubenswrapper[4832]: I0312 15:46:41.728457 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-76485774fd-8dtp8_d751c81a-b91d-4849-a382-81b234d4c6c8/barbican-api-log/0.log" Mar 12 15:46:41 crc kubenswrapper[4832]: I0312 15:46:41.755015 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5586cfd7d8-lcjh2_4f29ee66-6d6f-4940-9283-7bd2bff068b6/barbican-keystone-listener/0.log" Mar 12 15:46:41 crc kubenswrapper[4832]: I0312 15:46:41.798954 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5586cfd7d8-lcjh2_4f29ee66-6d6f-4940-9283-7bd2bff068b6/barbican-keystone-listener-log/0.log" 
Mar 12 15:46:41 crc kubenswrapper[4832]: I0312 15:46:41.944570 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5456b889d5-mb698_44661885-c36b-4450-b181-4bfa5f442420/barbican-worker/0.log" Mar 12 15:46:41 crc kubenswrapper[4832]: I0312 15:46:41.985848 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5456b889d5-mb698_44661885-c36b-4450-b181-4bfa5f442420/barbican-worker-log/0.log" Mar 12 15:46:42 crc kubenswrapper[4832]: I0312 15:46:42.168877 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-45v4j_4787eb10-18fc-4da2-98ce-246687619641/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:46:42 crc kubenswrapper[4832]: I0312 15:46:42.222738 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e/ceilometer-central-agent/0.log" Mar 12 15:46:42 crc kubenswrapper[4832]: I0312 15:46:42.289824 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e/ceilometer-notification-agent/0.log" Mar 12 15:46:42 crc kubenswrapper[4832]: I0312 15:46:42.360644 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e/proxy-httpd/0.log" Mar 12 15:46:42 crc kubenswrapper[4832]: I0312 15:46:42.395885 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3b0a33bd-1b0b-45b6-8937-d7b9047a2a2e/sg-core/0.log" Mar 12 15:46:42 crc kubenswrapper[4832]: I0312 15:46:42.582347 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e85b97b4-a179-4d8b-bb70-86bc2ae08d70/cinder-api/0.log" Mar 12 15:46:42 crc kubenswrapper[4832]: I0312 15:46:42.596450 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_e85b97b4-a179-4d8b-bb70-86bc2ae08d70/cinder-api-log/0.log" Mar 12 15:46:42 crc kubenswrapper[4832]: I0312 15:46:42.629764 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b694af35-a98e-4b41-a10e-88995b1398a1" path="/var/lib/kubelet/pods/b694af35-a98e-4b41-a10e-88995b1398a1/volumes" Mar 12 15:46:42 crc kubenswrapper[4832]: I0312 15:46:42.688494 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_eca6fd5d-1b83-4ff3-8216-41f81b29555f/cinder-scheduler/0.log" Mar 12 15:46:42 crc kubenswrapper[4832]: I0312 15:46:42.815739 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_eca6fd5d-1b83-4ff3-8216-41f81b29555f/probe/0.log" Mar 12 15:46:42 crc kubenswrapper[4832]: I0312 15:46:42.860526 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-drmzg_0a779f8a-9311-43e6-add6-68e19f39aadd/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:46:43 crc kubenswrapper[4832]: I0312 15:46:43.009067 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-fm8zv_27b4a258-8985-4bec-a0a5-d024cd4e9f55/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:46:43 crc kubenswrapper[4832]: I0312 15:46:43.074831 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-478qp_79812a9d-e99f-418d-98c0-c9005079c950/init/0.log" Mar 12 15:46:43 crc kubenswrapper[4832]: I0312 15:46:43.254199 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-478qp_79812a9d-e99f-418d-98c0-c9005079c950/init/0.log" Mar 12 15:46:43 crc kubenswrapper[4832]: I0312 15:46:43.312348 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-478qp_79812a9d-e99f-418d-98c0-c9005079c950/dnsmasq-dns/0.log" Mar 12 15:46:43 crc kubenswrapper[4832]: I0312 15:46:43.341216 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-9czwh_aba57da9-d394-41df-a7ea-23344bad0e60/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:46:43 crc kubenswrapper[4832]: I0312 15:46:43.506487 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0aa341ea-d6ca-4afd-b425-5197402c2ff8/glance-httpd/0.log" Mar 12 15:46:43 crc kubenswrapper[4832]: I0312 15:46:43.528812 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0aa341ea-d6ca-4afd-b425-5197402c2ff8/glance-log/0.log" Mar 12 15:46:43 crc kubenswrapper[4832]: I0312 15:46:43.701303 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_80295e6a-6a0d-4cb0-868d-684a2631b1eb/glance-httpd/0.log" Mar 12 15:46:43 crc kubenswrapper[4832]: I0312 15:46:43.721527 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_80295e6a-6a0d-4cb0-868d-684a2631b1eb/glance-log/0.log" Mar 12 15:46:43 crc kubenswrapper[4832]: I0312 15:46:43.886052 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7c5974b5d4-dhhm8_06633b31-01e2-4a1c-bf9e-e74b157fba1d/horizon/0.log" Mar 12 15:46:44 crc kubenswrapper[4832]: I0312 15:46:44.009062 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-c8bgl_298f7208-9759-4481-973f-2cd1da3c5d64/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:46:44 crc kubenswrapper[4832]: I0312 15:46:44.227107 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-ppl8h_5bc18e0a-c12a-49bc-bcb2-335ac6922bc9/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:46:44 crc kubenswrapper[4832]: I0312 15:46:44.261729 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7c5974b5d4-dhhm8_06633b31-01e2-4a1c-bf9e-e74b157fba1d/horizon-log/0.log" Mar 12 15:46:44 crc kubenswrapper[4832]: I0312 15:46:44.540934 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_64122d3d-d2ec-49ad-a01e-1497d5889af6/kube-state-metrics/0.log" Mar 12 15:46:44 crc kubenswrapper[4832]: I0312 15:46:44.544327 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-84975bc55b-p4rz5_606a2cdd-10ea-4e32-876c-b2149a2aa921/keystone-api/0.log" Mar 12 15:46:44 crc kubenswrapper[4832]: I0312 15:46:44.736836 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-wmghq_a043f4e6-f64c-4a26-ac04-93005bcc77d0/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:46:45 crc kubenswrapper[4832]: I0312 15:46:45.046409 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7c8f87d6b5-dbffr_fd3d8dc5-b55e-4c33-a15b-77741921f451/neutron-api/0.log" Mar 12 15:46:45 crc kubenswrapper[4832]: I0312 15:46:45.067094 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7c8f87d6b5-dbffr_fd3d8dc5-b55e-4c33-a15b-77741921f451/neutron-httpd/0.log" Mar 12 15:46:45 crc kubenswrapper[4832]: I0312 15:46:45.236477 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbnjk_75cf2468-905f-4551-ba33-4c055f2ac4ce/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:46:45 crc kubenswrapper[4832]: I0312 15:46:45.718592 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_d4d9828b-3b93-4cb1-a0bc-794a23c11f07/nova-cell0-conductor-conductor/0.log" Mar 12 15:46:45 crc kubenswrapper[4832]: I0312 15:46:45.751738 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3cbc1286-469f-4849-bb8a-4452af8d43d7/nova-api-log/0.log" Mar 12 15:46:45 crc kubenswrapper[4832]: I0312 15:46:45.842251 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3cbc1286-469f-4849-bb8a-4452af8d43d7/nova-api-api/0.log" Mar 12 15:46:46 crc kubenswrapper[4832]: I0312 15:46:46.042061 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d3cc4ebc-99b9-474c-aef6-c527ce1ed24e/nova-cell1-conductor-conductor/0.log" Mar 12 15:46:46 crc kubenswrapper[4832]: I0312 15:46:46.089590 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ce2dd00e-eeda-4844-a1d6-64391351b678/nova-cell1-novncproxy-novncproxy/0.log" Mar 12 15:46:46 crc kubenswrapper[4832]: I0312 15:46:46.199308 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-z664n_46a3ec68-bc9e-4758-ab38-6d6b776ad178/nova-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:46:46 crc kubenswrapper[4832]: I0312 15:46:46.366129 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_864eb0ae-dd5f-438f-81a0-e48bf297eecb/nova-metadata-log/0.log" Mar 12 15:46:46 crc kubenswrapper[4832]: I0312 15:46:46.842627 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_07e4e284-b647-4f24-915d-b50315c0fb5e/mysql-bootstrap/0.log" Mar 12 15:46:46 crc kubenswrapper[4832]: I0312 15:46:46.884567 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_5435c879-37ba-4fb2-bfb5-a7ccbf3d474c/nova-scheduler-scheduler/0.log" Mar 12 15:46:47 crc kubenswrapper[4832]: I0312 
15:46:47.051412 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_07e4e284-b647-4f24-915d-b50315c0fb5e/galera/0.log" Mar 12 15:46:47 crc kubenswrapper[4832]: I0312 15:46:47.059317 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_07e4e284-b647-4f24-915d-b50315c0fb5e/mysql-bootstrap/0.log" Mar 12 15:46:47 crc kubenswrapper[4832]: I0312 15:46:47.230210 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_45c3252f-6cf6-49c3-b42b-f692310a0e91/mysql-bootstrap/0.log" Mar 12 15:46:47 crc kubenswrapper[4832]: I0312 15:46:47.430183 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_45c3252f-6cf6-49c3-b42b-f692310a0e91/mysql-bootstrap/0.log" Mar 12 15:46:47 crc kubenswrapper[4832]: I0312 15:46:47.459943 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_45c3252f-6cf6-49c3-b42b-f692310a0e91/galera/0.log" Mar 12 15:46:47 crc kubenswrapper[4832]: I0312 15:46:47.507045 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_864eb0ae-dd5f-438f-81a0-e48bf297eecb/nova-metadata-metadata/0.log" Mar 12 15:46:47 crc kubenswrapper[4832]: I0312 15:46:47.636733 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_0988fee7-998d-4cf1-9740-9ccbdc012168/openstackclient/0.log" Mar 12 15:46:47 crc kubenswrapper[4832]: I0312 15:46:47.722143 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-6x8t6_9d4200a6-7cc2-4b4a-b01e-290567a2ec8c/ovn-controller/0.log" Mar 12 15:46:47 crc kubenswrapper[4832]: I0312 15:46:47.860875 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-455ln_2ca5bacc-89cf-4734-a055-a1725ccd05e5/openstack-network-exporter/0.log" Mar 12 15:46:47 crc kubenswrapper[4832]: I0312 15:46:47.930997 4832 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kmmfr_d50077d7-6691-4664-89cc-3be14f2e8313/ovsdb-server-init/0.log" Mar 12 15:46:48 crc kubenswrapper[4832]: I0312 15:46:48.177682 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kmmfr_d50077d7-6691-4664-89cc-3be14f2e8313/ovs-vswitchd/0.log" Mar 12 15:46:48 crc kubenswrapper[4832]: I0312 15:46:48.236333 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kmmfr_d50077d7-6691-4664-89cc-3be14f2e8313/ovsdb-server/0.log" Mar 12 15:46:48 crc kubenswrapper[4832]: I0312 15:46:48.248080 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kmmfr_d50077d7-6691-4664-89cc-3be14f2e8313/ovsdb-server-init/0.log" Mar 12 15:46:48 crc kubenswrapper[4832]: I0312 15:46:48.409858 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-8bfwd_f20eb1c2-5228-44bb-a4c8-f6bb88a8fd89/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:46:48 crc kubenswrapper[4832]: I0312 15:46:48.434873 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_07b52275-cab8-4095-ac58-8842d81e39fd/openstack-network-exporter/0.log" Mar 12 15:46:48 crc kubenswrapper[4832]: I0312 15:46:48.498801 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_07b52275-cab8-4095-ac58-8842d81e39fd/ovn-northd/0.log" Mar 12 15:46:48 crc kubenswrapper[4832]: I0312 15:46:48.667045 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_60f51f8d-71a9-4409-abd0-8981bced84a2/ovsdbserver-nb/0.log" Mar 12 15:46:48 crc kubenswrapper[4832]: I0312 15:46:48.668433 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_60f51f8d-71a9-4409-abd0-8981bced84a2/openstack-network-exporter/0.log" Mar 12 15:46:48 crc kubenswrapper[4832]: I0312 
15:46:48.857173 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d73c1039-b9bc-4861-87d3-22457aecb575/openstack-network-exporter/0.log" Mar 12 15:46:48 crc kubenswrapper[4832]: I0312 15:46:48.872112 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d73c1039-b9bc-4861-87d3-22457aecb575/ovsdbserver-sb/0.log" Mar 12 15:46:49 crc kubenswrapper[4832]: I0312 15:46:49.047305 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-58b9f48778-gcmpc_6070e7f1-ea29-422e-9574-77b87a8a9c3b/placement-api/0.log" Mar 12 15:46:49 crc kubenswrapper[4832]: I0312 15:46:49.144252 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-58b9f48778-gcmpc_6070e7f1-ea29-422e-9574-77b87a8a9c3b/placement-log/0.log" Mar 12 15:46:49 crc kubenswrapper[4832]: I0312 15:46:49.173833 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c/setup-container/0.log" Mar 12 15:46:49 crc kubenswrapper[4832]: I0312 15:46:49.372707 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c/setup-container/0.log" Mar 12 15:46:49 crc kubenswrapper[4832]: I0312 15:46:49.389227 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3/setup-container/0.log" Mar 12 15:46:49 crc kubenswrapper[4832]: I0312 15:46:49.410385 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_dc1c8be9-f6ee-42fb-86ed-b1e6d6e9013c/rabbitmq/0.log" Mar 12 15:46:49 crc kubenswrapper[4832]: I0312 15:46:49.638529 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3/setup-container/0.log" Mar 12 15:46:49 crc kubenswrapper[4832]: I0312 15:46:49.691180 4832 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b59b4c2e-d6b5-401b-b7a2-0faf2920fcb3/rabbitmq/0.log" Mar 12 15:46:49 crc kubenswrapper[4832]: I0312 15:46:49.742780 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-ctg82_1a5acff7-1fff-414e-9ad1-b4b8116f73d4/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:46:49 crc kubenswrapper[4832]: I0312 15:46:49.870522 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-v7m7c_96ffa11d-c5f1-4b32-b2cc-6bc830cf4662/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:46:49 crc kubenswrapper[4832]: I0312 15:46:49.944634 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-m622w_9a6fe906-9add-49ec-ad85-4f7ba8034f73/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:46:50 crc kubenswrapper[4832]: I0312 15:46:50.200335 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-bn6rd_9b1bf168-48a0-44f0-a01a-ada8aa0fbb24/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:46:50 crc kubenswrapper[4832]: I0312 15:46:50.207602 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-wscmn_92375ce9-aeef-48ed-885d-7b648497c2b5/ssh-known-hosts-edpm-deployment/0.log" Mar 12 15:46:50 crc kubenswrapper[4832]: I0312 15:46:50.440762 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5b9648779f-rg6wj_969337ac-7543-4b59-820e-61408d5af0c3/proxy-server/0.log" Mar 12 15:46:50 crc kubenswrapper[4832]: I0312 15:46:50.519539 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5b9648779f-rg6wj_969337ac-7543-4b59-820e-61408d5af0c3/proxy-httpd/0.log" Mar 12 15:46:50 crc kubenswrapper[4832]: I0312 15:46:50.595789 4832 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dbzrr_02632af9-ed7f-4481-865f-698db662e6fe/swift-ring-rebalance/0.log" Mar 12 15:46:50 crc kubenswrapper[4832]: I0312 15:46:50.663743 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2fcebd5-a8cd-4290-9055-e0a7bbec2854/account-auditor/0.log" Mar 12 15:46:50 crc kubenswrapper[4832]: I0312 15:46:50.731875 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2fcebd5-a8cd-4290-9055-e0a7bbec2854/account-reaper/0.log" Mar 12 15:46:50 crc kubenswrapper[4832]: I0312 15:46:50.863170 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2fcebd5-a8cd-4290-9055-e0a7bbec2854/account-server/0.log" Mar 12 15:46:50 crc kubenswrapper[4832]: I0312 15:46:50.865603 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2fcebd5-a8cd-4290-9055-e0a7bbec2854/account-replicator/0.log" Mar 12 15:46:50 crc kubenswrapper[4832]: I0312 15:46:50.913452 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2fcebd5-a8cd-4290-9055-e0a7bbec2854/container-auditor/0.log" Mar 12 15:46:50 crc kubenswrapper[4832]: I0312 15:46:50.972155 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2fcebd5-a8cd-4290-9055-e0a7bbec2854/container-replicator/0.log" Mar 12 15:46:51 crc kubenswrapper[4832]: I0312 15:46:51.094931 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2fcebd5-a8cd-4290-9055-e0a7bbec2854/container-server/0.log" Mar 12 15:46:51 crc kubenswrapper[4832]: I0312 15:46:51.123939 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2fcebd5-a8cd-4290-9055-e0a7bbec2854/container-updater/0.log" Mar 12 15:46:51 crc kubenswrapper[4832]: I0312 15:46:51.176366 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c2fcebd5-a8cd-4290-9055-e0a7bbec2854/object-auditor/0.log" Mar 12 15:46:51 crc kubenswrapper[4832]: I0312 15:46:51.198898 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2fcebd5-a8cd-4290-9055-e0a7bbec2854/object-expirer/0.log" Mar 12 15:46:51 crc kubenswrapper[4832]: I0312 15:46:51.312781 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2fcebd5-a8cd-4290-9055-e0a7bbec2854/object-replicator/0.log" Mar 12 15:46:51 crc kubenswrapper[4832]: I0312 15:46:51.371580 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2fcebd5-a8cd-4290-9055-e0a7bbec2854/object-server/0.log" Mar 12 15:46:51 crc kubenswrapper[4832]: I0312 15:46:51.384344 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2fcebd5-a8cd-4290-9055-e0a7bbec2854/rsync/0.log" Mar 12 15:46:51 crc kubenswrapper[4832]: I0312 15:46:51.415495 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2fcebd5-a8cd-4290-9055-e0a7bbec2854/object-updater/0.log" Mar 12 15:46:51 crc kubenswrapper[4832]: I0312 15:46:51.530994 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2fcebd5-a8cd-4290-9055-e0a7bbec2854/swift-recon-cron/0.log" Mar 12 15:46:51 crc kubenswrapper[4832]: I0312 15:46:51.667184 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-ft68d_bcaa1050-d9e8-4ff1-bec4-b408bd4a1a2e/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:46:51 crc kubenswrapper[4832]: I0312 15:46:51.778306 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_5d12cc2d-980d-4992-ac59-1d874529ad70/tempest-tests-tempest-tests-runner/0.log" Mar 12 15:46:51 crc kubenswrapper[4832]: I0312 15:46:51.897689 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b5c4dbfb-fd36-43ed-a327-0a01fe766188/test-operator-logs-container/0.log" Mar 12 15:46:51 crc kubenswrapper[4832]: I0312 15:46:51.983911 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-8ffhp_5de93141-455a-41ad-8137-2f14127035f7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:46:52 crc kubenswrapper[4832]: I0312 15:46:52.767733 4832 scope.go:117] "RemoveContainer" containerID="71edb25c5574fe9d493fa40c1f8a096bf902c1638177a8c22c8ad66bda2f8907" Mar 12 15:46:59 crc kubenswrapper[4832]: I0312 15:46:59.639306 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_0f04c5e2-4eb0-4515-aa61-006f0b34ee93/memcached/0.log" Mar 12 15:47:16 crc kubenswrapper[4832]: I0312 15:47:16.045861 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66d56f6ff4-sxlwm_3fd6082b-ab5f-434e-9585-2bcc34c7cba9/manager/0.log" Mar 12 15:47:16 crc kubenswrapper[4832]: I0312 15:47:16.328659 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r_9842c418-b1db-4b37-b49a-8e7edbf04777/util/0.log" Mar 12 15:47:16 crc kubenswrapper[4832]: I0312 15:47:16.478459 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r_9842c418-b1db-4b37-b49a-8e7edbf04777/util/0.log" Mar 12 15:47:16 crc kubenswrapper[4832]: I0312 15:47:16.483621 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r_9842c418-b1db-4b37-b49a-8e7edbf04777/pull/0.log" Mar 12 15:47:16 crc kubenswrapper[4832]: I0312 15:47:16.651249 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r_9842c418-b1db-4b37-b49a-8e7edbf04777/pull/0.log" Mar 12 15:47:16 crc kubenswrapper[4832]: I0312 15:47:16.837655 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r_9842c418-b1db-4b37-b49a-8e7edbf04777/util/0.log" Mar 12 15:47:16 crc kubenswrapper[4832]: I0312 15:47:16.875238 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r_9842c418-b1db-4b37-b49a-8e7edbf04777/pull/0.log" Mar 12 15:47:16 crc kubenswrapper[4832]: I0312 15:47:16.997556 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfd0a5aa88ceb988578091b7e436c5632868c2113fb1281941f0396cd6bb52r_9842c418-b1db-4b37-b49a-8e7edbf04777/extract/0.log" Mar 12 15:47:17 crc kubenswrapper[4832]: I0312 15:47:17.272596 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5964f64c48-2vj9s_6eb7bcdf-9bfa-4f7e-890b-9b5e7ea50f8f/manager/0.log" Mar 12 15:47:17 crc kubenswrapper[4832]: I0312 15:47:17.316114 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-984cd4dcf-rnjmc_f258cf7f-c099-40f8-94be-4e0ec5252d88/manager/0.log" Mar 12 15:47:17 crc kubenswrapper[4832]: I0312 15:47:17.459239 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-77b6666d85-2sljf_f8760bef-6ca3-412a-b8bc-49de609fe9d3/manager/0.log" Mar 12 15:47:17 crc kubenswrapper[4832]: I0312 15:47:17.598933 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d9d6b584d-r6kt4_2a4714d0-f39b-499a-88aa-e960dad0e00b/manager/0.log" Mar 12 15:47:17 crc kubenswrapper[4832]: I0312 
15:47:17.914481 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6bbb499bbc-jr5hd_e636b173-51b8-4325-a1cf-dcea5406cdee/manager/0.log" Mar 12 15:47:18 crc kubenswrapper[4832]: I0312 15:47:18.047955 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5995f4446f-qk7gq_999db9dc-984c-40aa-be0f-1d98b78bf44f/manager/0.log" Mar 12 15:47:18 crc kubenswrapper[4832]: I0312 15:47:18.189926 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-684f77d66d-8nlxs_572740f5-e207-4372-ab19-2b117aa31c69/manager/0.log" Mar 12 15:47:18 crc kubenswrapper[4832]: I0312 15:47:18.295590 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-68f45f9d9f-lvnqm_5b7a63b2-3a6b-41fc-b7d0-f95e07bb760b/manager/0.log" Mar 12 15:47:18 crc kubenswrapper[4832]: I0312 15:47:18.500824 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-658d4cdd5-9sw66_3272780b-2460-41f3-bc98-7ff7708bda6f/manager/0.log" Mar 12 15:47:18 crc kubenswrapper[4832]: I0312 15:47:18.701195 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-776c5696bf-8ngt8_6fe61dcf-24b0-4c97-9639-15335615d4d4/manager/0.log" Mar 12 15:47:18 crc kubenswrapper[4832]: I0312 15:47:18.907897 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-569cc54c5-2z4xv_78b5b9cf-6e4a-4ac8-8611-06b417453f45/manager/0.log" Mar 12 15:47:18 crc kubenswrapper[4832]: I0312 15:47:18.954962 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4f55cb5c-4xd9n_6c25d60c-d053-4b33-9ddd-8a95f18480f7/manager/0.log" Mar 12 15:47:19 crc 
kubenswrapper[4832]: I0312 15:47:19.166350 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-557ccf57b7b625q_fbce7e5d-a791-4984-94c9-3bfdc12d70b9/manager/0.log" Mar 12 15:47:19 crc kubenswrapper[4832]: I0312 15:47:19.673159 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-666b5bf768-rllmq_cdebf23f-0836-48d1-9edf-c72140fa5f77/operator/0.log" Mar 12 15:47:19 crc kubenswrapper[4832]: I0312 15:47:19.687340 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-lq2mx_470013d1-6bd6-4e73-beaf-98535ea56e43/registry-server/0.log" Mar 12 15:47:19 crc kubenswrapper[4832]: I0312 15:47:19.947483 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bbc5b68f9-qdn75_55d4a1b5-5971-426d-91dd-9a8f991552c0/manager/0.log" Mar 12 15:47:20 crc kubenswrapper[4832]: I0312 15:47:20.032251 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-574d45c66c-6t9gx_3d58b640-14cf-4576-b441-448a87e34b04/manager/0.log" Mar 12 15:47:20 crc kubenswrapper[4832]: I0312 15:47:20.187469 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-jjzmj_ddc979ca-b73c-42b1-91a9-baf0f882ccf2/operator/0.log" Mar 12 15:47:20 crc kubenswrapper[4832]: I0312 15:47:20.358595 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-677c674df7-c6wkm_555d3165-c8b4-4bd9-bdc9-2e988734971b/manager/0.log" Mar 12 15:47:20 crc kubenswrapper[4832]: I0312 15:47:20.548446 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6cd66dbd4b-5qrl6_6b8d3e31-3f6c-4be0-b289-cd5afd6bb142/manager/0.log" Mar 12 
15:47:20 crc kubenswrapper[4832]: I0312 15:47:20.588098 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-djk4r_65719325-3b5a-4c67-add5-446fbadb2951/manager/0.log" Mar 12 15:47:20 crc kubenswrapper[4832]: I0312 15:47:20.830000 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6dd88c6f67-hgppt_a24c7823-20be-4bc5-82cf-fd57d664cb8f/manager/0.log" Mar 12 15:47:21 crc kubenswrapper[4832]: I0312 15:47:21.015755 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7d46bf84bd-7lz8z_e684de45-1d61-4324-8d52-801b7f2c0b52/manager/0.log" Mar 12 15:47:22 crc kubenswrapper[4832]: I0312 15:47:22.125465 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-677bd678f7-ckrwx_1361287f-20d2-4603-acad-c6b3a79040b2/manager/0.log" Mar 12 15:47:26 crc kubenswrapper[4832]: I0312 15:47:26.313948 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:47:26 crc kubenswrapper[4832]: I0312 15:47:26.314362 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:47:39 crc kubenswrapper[4832]: I0312 15:47:39.648240 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-wp6fb_a9d5c80a-fef6-4eae-a1e9-951f2d72647b/control-plane-machine-set-operator/0.log" Mar 12 15:47:39 crc kubenswrapper[4832]: I0312 15:47:39.936127 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5g46q_780e312c-4f87-40d8-b146-0bcefe9c9c89/kube-rbac-proxy/0.log" Mar 12 15:47:40 crc kubenswrapper[4832]: I0312 15:47:40.046194 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5g46q_780e312c-4f87-40d8-b146-0bcefe9c9c89/machine-api-operator/0.log" Mar 12 15:47:52 crc kubenswrapper[4832]: I0312 15:47:52.644264 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-wnbqc_c1a5c18d-2238-41d7-abe8-e5b6ddba52ba/cert-manager-controller/0.log" Mar 12 15:47:52 crc kubenswrapper[4832]: I0312 15:47:52.774783 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-n6ng4_3c02e749-9e6c-43c2-8aec-e8a4be5c1664/cert-manager-cainjector/0.log" Mar 12 15:47:52 crc kubenswrapper[4832]: I0312 15:47:52.858740 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-mstfv_793aea64-41ee-4933-b96d-c95f08a1b554/cert-manager-webhook/0.log" Mar 12 15:47:56 crc kubenswrapper[4832]: I0312 15:47:56.314617 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:47:56 crc kubenswrapper[4832]: I0312 15:47:56.315082 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.348833 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t8kbj"] Mar 12 15:47:57 crc kubenswrapper[4832]: E0312 15:47:57.349615 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a38e2b7-5b5d-4a79-a0a8-9ffac2682636" containerName="container-00" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.349646 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a38e2b7-5b5d-4a79-a0a8-9ffac2682636" containerName="container-00" Mar 12 15:47:57 crc kubenswrapper[4832]: E0312 15:47:57.349659 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9a17575-0ffe-4286-bc78-7b4370a06d43" containerName="registry-server" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.349667 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9a17575-0ffe-4286-bc78-7b4370a06d43" containerName="registry-server" Mar 12 15:47:57 crc kubenswrapper[4832]: E0312 15:47:57.349681 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b694af35-a98e-4b41-a10e-88995b1398a1" containerName="registry-server" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.349688 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b694af35-a98e-4b41-a10e-88995b1398a1" containerName="registry-server" Mar 12 15:47:57 crc kubenswrapper[4832]: E0312 15:47:57.349701 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b694af35-a98e-4b41-a10e-88995b1398a1" containerName="extract-utilities" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.349708 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b694af35-a98e-4b41-a10e-88995b1398a1" containerName="extract-utilities" Mar 12 15:47:57 crc kubenswrapper[4832]: E0312 15:47:57.349721 4832 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b694af35-a98e-4b41-a10e-88995b1398a1" containerName="extract-content" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.349728 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b694af35-a98e-4b41-a10e-88995b1398a1" containerName="extract-content" Mar 12 15:47:57 crc kubenswrapper[4832]: E0312 15:47:57.349743 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9a17575-0ffe-4286-bc78-7b4370a06d43" containerName="extract-utilities" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.349750 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9a17575-0ffe-4286-bc78-7b4370a06d43" containerName="extract-utilities" Mar 12 15:47:57 crc kubenswrapper[4832]: E0312 15:47:57.349769 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9a17575-0ffe-4286-bc78-7b4370a06d43" containerName="extract-content" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.349776 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9a17575-0ffe-4286-bc78-7b4370a06d43" containerName="extract-content" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.350021 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b694af35-a98e-4b41-a10e-88995b1398a1" containerName="registry-server" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.350040 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9a17575-0ffe-4286-bc78-7b4370a06d43" containerName="registry-server" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.350056 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a38e2b7-5b5d-4a79-a0a8-9ffac2682636" containerName="container-00" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.351900 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8kbj" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.374745 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8kbj"] Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.538000 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1eb9add-a6ce-4ffc-b570-c57c1c148656-catalog-content\") pod \"redhat-marketplace-t8kbj\" (UID: \"a1eb9add-a6ce-4ffc-b570-c57c1c148656\") " pod="openshift-marketplace/redhat-marketplace-t8kbj" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.538105 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87n88\" (UniqueName: \"kubernetes.io/projected/a1eb9add-a6ce-4ffc-b570-c57c1c148656-kube-api-access-87n88\") pod \"redhat-marketplace-t8kbj\" (UID: \"a1eb9add-a6ce-4ffc-b570-c57c1c148656\") " pod="openshift-marketplace/redhat-marketplace-t8kbj" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.538322 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1eb9add-a6ce-4ffc-b570-c57c1c148656-utilities\") pod \"redhat-marketplace-t8kbj\" (UID: \"a1eb9add-a6ce-4ffc-b570-c57c1c148656\") " pod="openshift-marketplace/redhat-marketplace-t8kbj" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.640483 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1eb9add-a6ce-4ffc-b570-c57c1c148656-catalog-content\") pod \"redhat-marketplace-t8kbj\" (UID: \"a1eb9add-a6ce-4ffc-b570-c57c1c148656\") " pod="openshift-marketplace/redhat-marketplace-t8kbj" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.640566 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-87n88\" (UniqueName: \"kubernetes.io/projected/a1eb9add-a6ce-4ffc-b570-c57c1c148656-kube-api-access-87n88\") pod \"redhat-marketplace-t8kbj\" (UID: \"a1eb9add-a6ce-4ffc-b570-c57c1c148656\") " pod="openshift-marketplace/redhat-marketplace-t8kbj" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.640642 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1eb9add-a6ce-4ffc-b570-c57c1c148656-utilities\") pod \"redhat-marketplace-t8kbj\" (UID: \"a1eb9add-a6ce-4ffc-b570-c57c1c148656\") " pod="openshift-marketplace/redhat-marketplace-t8kbj" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.640981 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1eb9add-a6ce-4ffc-b570-c57c1c148656-catalog-content\") pod \"redhat-marketplace-t8kbj\" (UID: \"a1eb9add-a6ce-4ffc-b570-c57c1c148656\") " pod="openshift-marketplace/redhat-marketplace-t8kbj" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.641024 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1eb9add-a6ce-4ffc-b570-c57c1c148656-utilities\") pod \"redhat-marketplace-t8kbj\" (UID: \"a1eb9add-a6ce-4ffc-b570-c57c1c148656\") " pod="openshift-marketplace/redhat-marketplace-t8kbj" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.659773 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87n88\" (UniqueName: \"kubernetes.io/projected/a1eb9add-a6ce-4ffc-b570-c57c1c148656-kube-api-access-87n88\") pod \"redhat-marketplace-t8kbj\" (UID: \"a1eb9add-a6ce-4ffc-b570-c57c1c148656\") " pod="openshift-marketplace/redhat-marketplace-t8kbj" Mar 12 15:47:57 crc kubenswrapper[4832]: I0312 15:47:57.673194 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8kbj" Mar 12 15:47:58 crc kubenswrapper[4832]: I0312 15:47:58.149260 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8kbj"] Mar 12 15:47:58 crc kubenswrapper[4832]: I0312 15:47:58.294263 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8kbj" event={"ID":"a1eb9add-a6ce-4ffc-b570-c57c1c148656","Type":"ContainerStarted","Data":"e42c6c851e2e487683fde74c1269f2e99dcf26ef0f43bf60f8ad2514133a6aa3"} Mar 12 15:47:59 crc kubenswrapper[4832]: I0312 15:47:59.309491 4832 generic.go:334] "Generic (PLEG): container finished" podID="a1eb9add-a6ce-4ffc-b570-c57c1c148656" containerID="0e5d0c96655204af1cffeb221cf55960ea31fb8dbc218f62963da5f82265b01f" exitCode=0 Mar 12 15:47:59 crc kubenswrapper[4832]: I0312 15:47:59.309657 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8kbj" event={"ID":"a1eb9add-a6ce-4ffc-b570-c57c1c148656","Type":"ContainerDied","Data":"0e5d0c96655204af1cffeb221cf55960ea31fb8dbc218f62963da5f82265b01f"} Mar 12 15:48:00 crc kubenswrapper[4832]: I0312 15:48:00.152325 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555508-t678v"] Mar 12 15:48:00 crc kubenswrapper[4832]: I0312 15:48:00.154044 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555508-t678v" Mar 12 15:48:00 crc kubenswrapper[4832]: I0312 15:48:00.165103 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:48:00 crc kubenswrapper[4832]: I0312 15:48:00.165328 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:48:00 crc kubenswrapper[4832]: I0312 15:48:00.165692 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:48:00 crc kubenswrapper[4832]: I0312 15:48:00.205185 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555508-t678v"] Mar 12 15:48:00 crc kubenswrapper[4832]: I0312 15:48:00.299751 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cw88\" (UniqueName: \"kubernetes.io/projected/fddee263-2225-4a47-8361-60b13e70e608-kube-api-access-4cw88\") pod \"auto-csr-approver-29555508-t678v\" (UID: \"fddee263-2225-4a47-8361-60b13e70e608\") " pod="openshift-infra/auto-csr-approver-29555508-t678v" Mar 12 15:48:00 crc kubenswrapper[4832]: I0312 15:48:00.321214 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8kbj" event={"ID":"a1eb9add-a6ce-4ffc-b570-c57c1c148656","Type":"ContainerStarted","Data":"f73550383ba3dfa0d2f991788058b3e47d604914b53412bb4f5443c5f7088c4a"} Mar 12 15:48:00 crc kubenswrapper[4832]: I0312 15:48:00.402242 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cw88\" (UniqueName: \"kubernetes.io/projected/fddee263-2225-4a47-8361-60b13e70e608-kube-api-access-4cw88\") pod \"auto-csr-approver-29555508-t678v\" (UID: \"fddee263-2225-4a47-8361-60b13e70e608\") " pod="openshift-infra/auto-csr-approver-29555508-t678v" Mar 12 15:48:00 crc 
kubenswrapper[4832]: I0312 15:48:00.424303 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cw88\" (UniqueName: \"kubernetes.io/projected/fddee263-2225-4a47-8361-60b13e70e608-kube-api-access-4cw88\") pod \"auto-csr-approver-29555508-t678v\" (UID: \"fddee263-2225-4a47-8361-60b13e70e608\") " pod="openshift-infra/auto-csr-approver-29555508-t678v" Mar 12 15:48:00 crc kubenswrapper[4832]: I0312 15:48:00.543623 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555508-t678v" Mar 12 15:48:01 crc kubenswrapper[4832]: W0312 15:48:01.132308 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfddee263_2225_4a47_8361_60b13e70e608.slice/crio-b593e6a8853e46c329af80aab83ef49cbf7760ded5473af48cfe35fa023b25d2 WatchSource:0}: Error finding container b593e6a8853e46c329af80aab83ef49cbf7760ded5473af48cfe35fa023b25d2: Status 404 returned error can't find the container with id b593e6a8853e46c329af80aab83ef49cbf7760ded5473af48cfe35fa023b25d2 Mar 12 15:48:01 crc kubenswrapper[4832]: I0312 15:48:01.133310 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555508-t678v"] Mar 12 15:48:01 crc kubenswrapper[4832]: I0312 15:48:01.330410 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555508-t678v" event={"ID":"fddee263-2225-4a47-8361-60b13e70e608","Type":"ContainerStarted","Data":"b593e6a8853e46c329af80aab83ef49cbf7760ded5473af48cfe35fa023b25d2"} Mar 12 15:48:01 crc kubenswrapper[4832]: I0312 15:48:01.332857 4832 generic.go:334] "Generic (PLEG): container finished" podID="a1eb9add-a6ce-4ffc-b570-c57c1c148656" containerID="f73550383ba3dfa0d2f991788058b3e47d604914b53412bb4f5443c5f7088c4a" exitCode=0 Mar 12 15:48:01 crc kubenswrapper[4832]: I0312 15:48:01.332894 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-t8kbj" event={"ID":"a1eb9add-a6ce-4ffc-b570-c57c1c148656","Type":"ContainerDied","Data":"f73550383ba3dfa0d2f991788058b3e47d604914b53412bb4f5443c5f7088c4a"} Mar 12 15:48:02 crc kubenswrapper[4832]: I0312 15:48:02.345791 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8kbj" event={"ID":"a1eb9add-a6ce-4ffc-b570-c57c1c148656","Type":"ContainerStarted","Data":"bd1c02014a2786af43f775978c10bc6aba7f18babdfa7f53e9628664643c83a5"} Mar 12 15:48:02 crc kubenswrapper[4832]: I0312 15:48:02.369856 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t8kbj" podStartSLOduration=2.923364919 podStartE2EDuration="5.369836937s" podCreationTimestamp="2026-03-12 15:47:57 +0000 UTC" firstStartedPulling="2026-03-12 15:47:59.312534378 +0000 UTC m=+3637.956548604" lastFinishedPulling="2026-03-12 15:48:01.759006376 +0000 UTC m=+3640.403020622" observedRunningTime="2026-03-12 15:48:02.363562418 +0000 UTC m=+3641.007576644" watchObservedRunningTime="2026-03-12 15:48:02.369836937 +0000 UTC m=+3641.013851163" Mar 12 15:48:03 crc kubenswrapper[4832]: I0312 15:48:03.355348 4832 generic.go:334] "Generic (PLEG): container finished" podID="fddee263-2225-4a47-8361-60b13e70e608" containerID="f6705e21acd14eb306141f2ce128b6eded605ec39feab19120151aff176b0e7e" exitCode=0 Mar 12 15:48:03 crc kubenswrapper[4832]: I0312 15:48:03.355405 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555508-t678v" event={"ID":"fddee263-2225-4a47-8361-60b13e70e608","Type":"ContainerDied","Data":"f6705e21acd14eb306141f2ce128b6eded605ec39feab19120151aff176b0e7e"} Mar 12 15:48:04 crc kubenswrapper[4832]: I0312 15:48:04.746412 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555508-t678v" Mar 12 15:48:04 crc kubenswrapper[4832]: I0312 15:48:04.792950 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cw88\" (UniqueName: \"kubernetes.io/projected/fddee263-2225-4a47-8361-60b13e70e608-kube-api-access-4cw88\") pod \"fddee263-2225-4a47-8361-60b13e70e608\" (UID: \"fddee263-2225-4a47-8361-60b13e70e608\") " Mar 12 15:48:04 crc kubenswrapper[4832]: I0312 15:48:04.812725 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fddee263-2225-4a47-8361-60b13e70e608-kube-api-access-4cw88" (OuterVolumeSpecName: "kube-api-access-4cw88") pod "fddee263-2225-4a47-8361-60b13e70e608" (UID: "fddee263-2225-4a47-8361-60b13e70e608"). InnerVolumeSpecName "kube-api-access-4cw88". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:48:04 crc kubenswrapper[4832]: I0312 15:48:04.895291 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cw88\" (UniqueName: \"kubernetes.io/projected/fddee263-2225-4a47-8361-60b13e70e608-kube-api-access-4cw88\") on node \"crc\" DevicePath \"\"" Mar 12 15:48:05 crc kubenswrapper[4832]: I0312 15:48:05.375353 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555508-t678v" event={"ID":"fddee263-2225-4a47-8361-60b13e70e608","Type":"ContainerDied","Data":"b593e6a8853e46c329af80aab83ef49cbf7760ded5473af48cfe35fa023b25d2"} Mar 12 15:48:05 crc kubenswrapper[4832]: I0312 15:48:05.375687 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b593e6a8853e46c329af80aab83ef49cbf7760ded5473af48cfe35fa023b25d2" Mar 12 15:48:05 crc kubenswrapper[4832]: I0312 15:48:05.375396 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555508-t678v" Mar 12 15:48:05 crc kubenswrapper[4832]: I0312 15:48:05.834209 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555502-vt2sx"] Mar 12 15:48:05 crc kubenswrapper[4832]: I0312 15:48:05.845788 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555502-vt2sx"] Mar 12 15:48:06 crc kubenswrapper[4832]: I0312 15:48:06.629927 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de9797b7-dc13-4a96-a595-7a926c9881a3" path="/var/lib/kubelet/pods/de9797b7-dc13-4a96-a595-7a926c9881a3/volumes" Mar 12 15:48:06 crc kubenswrapper[4832]: I0312 15:48:06.633085 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-2s9jv_3f01f5a7-2adf-424c-9302-a8469626d969/nmstate-console-plugin/0.log" Mar 12 15:48:06 crc kubenswrapper[4832]: I0312 15:48:06.858674 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wg6lb_1c73d148-e9e6-4b12-b90e-fcdebc0c97f0/nmstate-handler/0.log" Mar 12 15:48:06 crc kubenswrapper[4832]: I0312 15:48:06.871859 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-nk4bz_0d538149-e77a-4bda-8a62-ef47cfc27f04/kube-rbac-proxy/0.log" Mar 12 15:48:07 crc kubenswrapper[4832]: I0312 15:48:07.007955 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-nk4bz_0d538149-e77a-4bda-8a62-ef47cfc27f04/nmstate-metrics/0.log" Mar 12 15:48:07 crc kubenswrapper[4832]: I0312 15:48:07.062877 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-qfcxs_fe09f7b6-ef26-402f-9890-d0cf00dde01b/nmstate-operator/0.log" Mar 12 15:48:07 crc kubenswrapper[4832]: I0312 15:48:07.205887 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-mhmks_d5f4d3bf-8464-490f-9874-1442a1e08a2c/nmstate-webhook/0.log" Mar 12 15:48:07 crc kubenswrapper[4832]: I0312 15:48:07.673758 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t8kbj" Mar 12 15:48:07 crc kubenswrapper[4832]: I0312 15:48:07.673820 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t8kbj" Mar 12 15:48:07 crc kubenswrapper[4832]: I0312 15:48:07.730828 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t8kbj" Mar 12 15:48:08 crc kubenswrapper[4832]: I0312 15:48:08.465411 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t8kbj" Mar 12 15:48:08 crc kubenswrapper[4832]: I0312 15:48:08.514026 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8kbj"] Mar 12 15:48:10 crc kubenswrapper[4832]: I0312 15:48:10.417034 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t8kbj" podUID="a1eb9add-a6ce-4ffc-b570-c57c1c148656" containerName="registry-server" containerID="cri-o://bd1c02014a2786af43f775978c10bc6aba7f18babdfa7f53e9628664643c83a5" gracePeriod=2 Mar 12 15:48:10 crc kubenswrapper[4832]: I0312 15:48:10.845021 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8kbj" Mar 12 15:48:10 crc kubenswrapper[4832]: I0312 15:48:10.910158 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87n88\" (UniqueName: \"kubernetes.io/projected/a1eb9add-a6ce-4ffc-b570-c57c1c148656-kube-api-access-87n88\") pod \"a1eb9add-a6ce-4ffc-b570-c57c1c148656\" (UID: \"a1eb9add-a6ce-4ffc-b570-c57c1c148656\") " Mar 12 15:48:10 crc kubenswrapper[4832]: I0312 15:48:10.910271 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1eb9add-a6ce-4ffc-b570-c57c1c148656-utilities\") pod \"a1eb9add-a6ce-4ffc-b570-c57c1c148656\" (UID: \"a1eb9add-a6ce-4ffc-b570-c57c1c148656\") " Mar 12 15:48:10 crc kubenswrapper[4832]: I0312 15:48:10.910335 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1eb9add-a6ce-4ffc-b570-c57c1c148656-catalog-content\") pod \"a1eb9add-a6ce-4ffc-b570-c57c1c148656\" (UID: \"a1eb9add-a6ce-4ffc-b570-c57c1c148656\") " Mar 12 15:48:10 crc kubenswrapper[4832]: I0312 15:48:10.911120 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1eb9add-a6ce-4ffc-b570-c57c1c148656-utilities" (OuterVolumeSpecName: "utilities") pod "a1eb9add-a6ce-4ffc-b570-c57c1c148656" (UID: "a1eb9add-a6ce-4ffc-b570-c57c1c148656"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:48:10 crc kubenswrapper[4832]: I0312 15:48:10.916388 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1eb9add-a6ce-4ffc-b570-c57c1c148656-kube-api-access-87n88" (OuterVolumeSpecName: "kube-api-access-87n88") pod "a1eb9add-a6ce-4ffc-b570-c57c1c148656" (UID: "a1eb9add-a6ce-4ffc-b570-c57c1c148656"). InnerVolumeSpecName "kube-api-access-87n88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:48:10 crc kubenswrapper[4832]: I0312 15:48:10.945166 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1eb9add-a6ce-4ffc-b570-c57c1c148656-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1eb9add-a6ce-4ffc-b570-c57c1c148656" (UID: "a1eb9add-a6ce-4ffc-b570-c57c1c148656"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:48:11 crc kubenswrapper[4832]: I0312 15:48:11.012406 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1eb9add-a6ce-4ffc-b570-c57c1c148656-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:48:11 crc kubenswrapper[4832]: I0312 15:48:11.012451 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87n88\" (UniqueName: \"kubernetes.io/projected/a1eb9add-a6ce-4ffc-b570-c57c1c148656-kube-api-access-87n88\") on node \"crc\" DevicePath \"\"" Mar 12 15:48:11 crc kubenswrapper[4832]: I0312 15:48:11.012466 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1eb9add-a6ce-4ffc-b570-c57c1c148656-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:48:11 crc kubenswrapper[4832]: I0312 15:48:11.428054 4832 generic.go:334] "Generic (PLEG): container finished" podID="a1eb9add-a6ce-4ffc-b570-c57c1c148656" containerID="bd1c02014a2786af43f775978c10bc6aba7f18babdfa7f53e9628664643c83a5" exitCode=0 Mar 12 15:48:11 crc kubenswrapper[4832]: I0312 15:48:11.428102 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8kbj" event={"ID":"a1eb9add-a6ce-4ffc-b570-c57c1c148656","Type":"ContainerDied","Data":"bd1c02014a2786af43f775978c10bc6aba7f18babdfa7f53e9628664643c83a5"} Mar 12 15:48:11 crc kubenswrapper[4832]: I0312 15:48:11.428135 4832 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-t8kbj" event={"ID":"a1eb9add-a6ce-4ffc-b570-c57c1c148656","Type":"ContainerDied","Data":"e42c6c851e2e487683fde74c1269f2e99dcf26ef0f43bf60f8ad2514133a6aa3"} Mar 12 15:48:11 crc kubenswrapper[4832]: I0312 15:48:11.428152 4832 scope.go:117] "RemoveContainer" containerID="bd1c02014a2786af43f775978c10bc6aba7f18babdfa7f53e9628664643c83a5" Mar 12 15:48:11 crc kubenswrapper[4832]: I0312 15:48:11.428149 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8kbj" Mar 12 15:48:11 crc kubenswrapper[4832]: I0312 15:48:11.451324 4832 scope.go:117] "RemoveContainer" containerID="f73550383ba3dfa0d2f991788058b3e47d604914b53412bb4f5443c5f7088c4a" Mar 12 15:48:11 crc kubenswrapper[4832]: I0312 15:48:11.473813 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8kbj"] Mar 12 15:48:11 crc kubenswrapper[4832]: I0312 15:48:11.477680 4832 scope.go:117] "RemoveContainer" containerID="0e5d0c96655204af1cffeb221cf55960ea31fb8dbc218f62963da5f82265b01f" Mar 12 15:48:11 crc kubenswrapper[4832]: I0312 15:48:11.479527 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8kbj"] Mar 12 15:48:11 crc kubenswrapper[4832]: I0312 15:48:11.519240 4832 scope.go:117] "RemoveContainer" containerID="bd1c02014a2786af43f775978c10bc6aba7f18babdfa7f53e9628664643c83a5" Mar 12 15:48:11 crc kubenswrapper[4832]: E0312 15:48:11.519731 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd1c02014a2786af43f775978c10bc6aba7f18babdfa7f53e9628664643c83a5\": container with ID starting with bd1c02014a2786af43f775978c10bc6aba7f18babdfa7f53e9628664643c83a5 not found: ID does not exist" containerID="bd1c02014a2786af43f775978c10bc6aba7f18babdfa7f53e9628664643c83a5" Mar 12 15:48:11 crc kubenswrapper[4832]: I0312 15:48:11.519763 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd1c02014a2786af43f775978c10bc6aba7f18babdfa7f53e9628664643c83a5"} err="failed to get container status \"bd1c02014a2786af43f775978c10bc6aba7f18babdfa7f53e9628664643c83a5\": rpc error: code = NotFound desc = could not find container \"bd1c02014a2786af43f775978c10bc6aba7f18babdfa7f53e9628664643c83a5\": container with ID starting with bd1c02014a2786af43f775978c10bc6aba7f18babdfa7f53e9628664643c83a5 not found: ID does not exist" Mar 12 15:48:11 crc kubenswrapper[4832]: I0312 15:48:11.519784 4832 scope.go:117] "RemoveContainer" containerID="f73550383ba3dfa0d2f991788058b3e47d604914b53412bb4f5443c5f7088c4a" Mar 12 15:48:11 crc kubenswrapper[4832]: E0312 15:48:11.520269 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f73550383ba3dfa0d2f991788058b3e47d604914b53412bb4f5443c5f7088c4a\": container with ID starting with f73550383ba3dfa0d2f991788058b3e47d604914b53412bb4f5443c5f7088c4a not found: ID does not exist" containerID="f73550383ba3dfa0d2f991788058b3e47d604914b53412bb4f5443c5f7088c4a" Mar 12 15:48:11 crc kubenswrapper[4832]: I0312 15:48:11.520318 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f73550383ba3dfa0d2f991788058b3e47d604914b53412bb4f5443c5f7088c4a"} err="failed to get container status \"f73550383ba3dfa0d2f991788058b3e47d604914b53412bb4f5443c5f7088c4a\": rpc error: code = NotFound desc = could not find container \"f73550383ba3dfa0d2f991788058b3e47d604914b53412bb4f5443c5f7088c4a\": container with ID starting with f73550383ba3dfa0d2f991788058b3e47d604914b53412bb4f5443c5f7088c4a not found: ID does not exist" Mar 12 15:48:11 crc kubenswrapper[4832]: I0312 15:48:11.520348 4832 scope.go:117] "RemoveContainer" containerID="0e5d0c96655204af1cffeb221cf55960ea31fb8dbc218f62963da5f82265b01f" Mar 12 15:48:11 crc kubenswrapper[4832]: E0312 
15:48:11.520676 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e5d0c96655204af1cffeb221cf55960ea31fb8dbc218f62963da5f82265b01f\": container with ID starting with 0e5d0c96655204af1cffeb221cf55960ea31fb8dbc218f62963da5f82265b01f not found: ID does not exist" containerID="0e5d0c96655204af1cffeb221cf55960ea31fb8dbc218f62963da5f82265b01f" Mar 12 15:48:11 crc kubenswrapper[4832]: I0312 15:48:11.520723 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e5d0c96655204af1cffeb221cf55960ea31fb8dbc218f62963da5f82265b01f"} err="failed to get container status \"0e5d0c96655204af1cffeb221cf55960ea31fb8dbc218f62963da5f82265b01f\": rpc error: code = NotFound desc = could not find container \"0e5d0c96655204af1cffeb221cf55960ea31fb8dbc218f62963da5f82265b01f\": container with ID starting with 0e5d0c96655204af1cffeb221cf55960ea31fb8dbc218f62963da5f82265b01f not found: ID does not exist" Mar 12 15:48:12 crc kubenswrapper[4832]: I0312 15:48:12.631890 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1eb9add-a6ce-4ffc-b570-c57c1c148656" path="/var/lib/kubelet/pods/a1eb9add-a6ce-4ffc-b570-c57c1c148656/volumes" Mar 12 15:48:26 crc kubenswrapper[4832]: I0312 15:48:26.314180 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:48:26 crc kubenswrapper[4832]: I0312 15:48:26.314936 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 12 15:48:26 crc kubenswrapper[4832]: I0312 15:48:26.315006 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" Mar 12 15:48:26 crc kubenswrapper[4832]: I0312 15:48:26.315965 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85"} pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 15:48:26 crc kubenswrapper[4832]: I0312 15:48:26.316043 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" containerID="cri-o://7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85" gracePeriod=600 Mar 12 15:48:26 crc kubenswrapper[4832]: E0312 15:48:26.437360 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:48:26 crc kubenswrapper[4832]: I0312 15:48:26.561403 4832 generic.go:334] "Generic (PLEG): container finished" podID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85" exitCode=0 Mar 12 15:48:26 crc kubenswrapper[4832]: I0312 15:48:26.561450 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" 
event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerDied","Data":"7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85"} Mar 12 15:48:26 crc kubenswrapper[4832]: I0312 15:48:26.561487 4832 scope.go:117] "RemoveContainer" containerID="9c57499912bbde1c5ad6ba1ac6bbb5cfde685e2cc87fb12523a97e99c667c390" Mar 12 15:48:26 crc kubenswrapper[4832]: I0312 15:48:26.562130 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85" Mar 12 15:48:26 crc kubenswrapper[4832]: E0312 15:48:26.562445 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:48:34 crc kubenswrapper[4832]: I0312 15:48:34.593809 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-whsb9_c9adb53f-cecb-4ad0-b1ec-77504b077006/kube-rbac-proxy/0.log" Mar 12 15:48:34 crc kubenswrapper[4832]: I0312 15:48:34.692464 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-whsb9_c9adb53f-cecb-4ad0-b1ec-77504b077006/controller/0.log" Mar 12 15:48:34 crc kubenswrapper[4832]: I0312 15:48:34.813849 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q2scw_5315b3fe-3cfe-49e0-9965-43192f3b0f9c/cp-frr-files/0.log" Mar 12 15:48:34 crc kubenswrapper[4832]: I0312 15:48:34.982776 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q2scw_5315b3fe-3cfe-49e0-9965-43192f3b0f9c/cp-metrics/0.log" Mar 12 15:48:35 crc kubenswrapper[4832]: I0312 15:48:35.001644 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-q2scw_5315b3fe-3cfe-49e0-9965-43192f3b0f9c/cp-reloader/0.log" Mar 12 15:48:35 crc kubenswrapper[4832]: I0312 15:48:35.021175 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q2scw_5315b3fe-3cfe-49e0-9965-43192f3b0f9c/cp-frr-files/0.log" Mar 12 15:48:35 crc kubenswrapper[4832]: I0312 15:48:35.039278 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q2scw_5315b3fe-3cfe-49e0-9965-43192f3b0f9c/cp-reloader/0.log" Mar 12 15:48:35 crc kubenswrapper[4832]: I0312 15:48:35.265791 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q2scw_5315b3fe-3cfe-49e0-9965-43192f3b0f9c/cp-frr-files/0.log" Mar 12 15:48:35 crc kubenswrapper[4832]: I0312 15:48:35.265943 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q2scw_5315b3fe-3cfe-49e0-9965-43192f3b0f9c/cp-metrics/0.log" Mar 12 15:48:35 crc kubenswrapper[4832]: I0312 15:48:35.277696 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q2scw_5315b3fe-3cfe-49e0-9965-43192f3b0f9c/cp-reloader/0.log" Mar 12 15:48:35 crc kubenswrapper[4832]: I0312 15:48:35.328138 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q2scw_5315b3fe-3cfe-49e0-9965-43192f3b0f9c/cp-metrics/0.log" Mar 12 15:48:35 crc kubenswrapper[4832]: I0312 15:48:35.630409 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q2scw_5315b3fe-3cfe-49e0-9965-43192f3b0f9c/cp-metrics/0.log" Mar 12 15:48:35 crc kubenswrapper[4832]: I0312 15:48:35.657945 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q2scw_5315b3fe-3cfe-49e0-9965-43192f3b0f9c/cp-reloader/0.log" Mar 12 15:48:35 crc kubenswrapper[4832]: I0312 15:48:35.657945 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-q2scw_5315b3fe-3cfe-49e0-9965-43192f3b0f9c/cp-frr-files/0.log" Mar 12 15:48:35 crc kubenswrapper[4832]: I0312 15:48:35.660284 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q2scw_5315b3fe-3cfe-49e0-9965-43192f3b0f9c/controller/0.log" Mar 12 15:48:35 crc kubenswrapper[4832]: I0312 15:48:35.842338 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q2scw_5315b3fe-3cfe-49e0-9965-43192f3b0f9c/kube-rbac-proxy-frr/0.log" Mar 12 15:48:35 crc kubenswrapper[4832]: I0312 15:48:35.852125 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q2scw_5315b3fe-3cfe-49e0-9965-43192f3b0f9c/kube-rbac-proxy/0.log" Mar 12 15:48:35 crc kubenswrapper[4832]: I0312 15:48:35.868600 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q2scw_5315b3fe-3cfe-49e0-9965-43192f3b0f9c/frr-metrics/0.log" Mar 12 15:48:36 crc kubenswrapper[4832]: I0312 15:48:36.037895 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q2scw_5315b3fe-3cfe-49e0-9965-43192f3b0f9c/reloader/0.log" Mar 12 15:48:36 crc kubenswrapper[4832]: I0312 15:48:36.052424 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-qbvws_8b51d1d8-1bda-4c45-9a0e-c712c078112e/frr-k8s-webhook-server/0.log" Mar 12 15:48:36 crc kubenswrapper[4832]: I0312 15:48:36.331079 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-866fc7dbb5-9m9sh_e22ca826-f71d-4391-a004-03da3653e5d0/manager/0.log" Mar 12 15:48:36 crc kubenswrapper[4832]: I0312 15:48:36.490201 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-79f9854454-ngt5l_62554f83-27bb-4e87-941a-f6bdff2d9f99/webhook-server/0.log" Mar 12 15:48:36 crc kubenswrapper[4832]: I0312 15:48:36.556870 4832 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6t98f_e1c64350-6632-4181-9c9a-80eb712e2f00/kube-rbac-proxy/0.log" Mar 12 15:48:37 crc kubenswrapper[4832]: I0312 15:48:37.207264 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6t98f_e1c64350-6632-4181-9c9a-80eb712e2f00/speaker/0.log" Mar 12 15:48:37 crc kubenswrapper[4832]: I0312 15:48:37.392629 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q2scw_5315b3fe-3cfe-49e0-9965-43192f3b0f9c/frr/0.log" Mar 12 15:48:41 crc kubenswrapper[4832]: I0312 15:48:41.619485 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85" Mar 12 15:48:41 crc kubenswrapper[4832]: E0312 15:48:41.620466 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:48:50 crc kubenswrapper[4832]: I0312 15:48:50.311725 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b_c89f0058-b036-4452-b358-f49f86a66fb7/util/0.log" Mar 12 15:48:50 crc kubenswrapper[4832]: I0312 15:48:50.473253 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b_c89f0058-b036-4452-b358-f49f86a66fb7/util/0.log" Mar 12 15:48:50 crc kubenswrapper[4832]: I0312 15:48:50.530744 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b_c89f0058-b036-4452-b358-f49f86a66fb7/pull/0.log" Mar 12 15:48:50 crc kubenswrapper[4832]: I0312 15:48:50.572604 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b_c89f0058-b036-4452-b358-f49f86a66fb7/pull/0.log" Mar 12 15:48:50 crc kubenswrapper[4832]: I0312 15:48:50.708909 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b_c89f0058-b036-4452-b358-f49f86a66fb7/util/0.log" Mar 12 15:48:50 crc kubenswrapper[4832]: I0312 15:48:50.709456 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b_c89f0058-b036-4452-b358-f49f86a66fb7/pull/0.log" Mar 12 15:48:50 crc kubenswrapper[4832]: I0312 15:48:50.732835 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rrh9b_c89f0058-b036-4452-b358-f49f86a66fb7/extract/0.log" Mar 12 15:48:51 crc kubenswrapper[4832]: I0312 15:48:51.031556 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks_eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020/util/0.log" Mar 12 15:48:51 crc kubenswrapper[4832]: I0312 15:48:51.182557 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks_eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020/util/0.log" Mar 12 15:48:51 crc kubenswrapper[4832]: I0312 15:48:51.207938 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks_eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020/pull/0.log" Mar 12 
15:48:51 crc kubenswrapper[4832]: I0312 15:48:51.225210 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks_eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020/pull/0.log" Mar 12 15:48:51 crc kubenswrapper[4832]: I0312 15:48:51.388192 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks_eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020/extract/0.log" Mar 12 15:48:51 crc kubenswrapper[4832]: I0312 15:48:51.417138 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks_eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020/pull/0.log" Mar 12 15:48:51 crc kubenswrapper[4832]: I0312 15:48:51.420119 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14zcks_eb5dc3ca-05eb-421b-b71c-6fb8f7e8b020/util/0.log" Mar 12 15:48:51 crc kubenswrapper[4832]: I0312 15:48:51.561276 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jgllk_50f84e8c-fd56-4ff8-94de-906f7ed10a0e/extract-utilities/0.log" Mar 12 15:48:51 crc kubenswrapper[4832]: I0312 15:48:51.698317 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jgllk_50f84e8c-fd56-4ff8-94de-906f7ed10a0e/extract-utilities/0.log" Mar 12 15:48:51 crc kubenswrapper[4832]: I0312 15:48:51.728704 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jgllk_50f84e8c-fd56-4ff8-94de-906f7ed10a0e/extract-content/0.log" Mar 12 15:48:51 crc kubenswrapper[4832]: I0312 15:48:51.728782 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jgllk_50f84e8c-fd56-4ff8-94de-906f7ed10a0e/extract-content/0.log" Mar 12 
15:48:51 crc kubenswrapper[4832]: I0312 15:48:51.924943 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jgllk_50f84e8c-fd56-4ff8-94de-906f7ed10a0e/extract-content/0.log" Mar 12 15:48:51 crc kubenswrapper[4832]: I0312 15:48:51.949481 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jgllk_50f84e8c-fd56-4ff8-94de-906f7ed10a0e/extract-utilities/0.log" Mar 12 15:48:52 crc kubenswrapper[4832]: I0312 15:48:52.138162 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ptct_ce400e6f-fdd2-48e7-98bf-bc68d986e829/extract-utilities/0.log" Mar 12 15:48:52 crc kubenswrapper[4832]: I0312 15:48:52.369424 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ptct_ce400e6f-fdd2-48e7-98bf-bc68d986e829/extract-utilities/0.log" Mar 12 15:48:52 crc kubenswrapper[4832]: I0312 15:48:52.459568 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ptct_ce400e6f-fdd2-48e7-98bf-bc68d986e829/extract-content/0.log" Mar 12 15:48:52 crc kubenswrapper[4832]: I0312 15:48:52.476162 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ptct_ce400e6f-fdd2-48e7-98bf-bc68d986e829/extract-content/0.log" Mar 12 15:48:52 crc kubenswrapper[4832]: I0312 15:48:52.560050 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jgllk_50f84e8c-fd56-4ff8-94de-906f7ed10a0e/registry-server/0.log" Mar 12 15:48:52 crc kubenswrapper[4832]: I0312 15:48:52.646982 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ptct_ce400e6f-fdd2-48e7-98bf-bc68d986e829/extract-utilities/0.log" Mar 12 15:48:52 crc kubenswrapper[4832]: I0312 15:48:52.650399 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-5ptct_ce400e6f-fdd2-48e7-98bf-bc68d986e829/extract-content/0.log" Mar 12 15:48:52 crc kubenswrapper[4832]: I0312 15:48:52.883529 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-2p9wx_1fe789bb-6979-470b-96cc-e07a65463ecb/marketplace-operator/0.log" Mar 12 15:48:52 crc kubenswrapper[4832]: I0312 15:48:52.920388 4832 scope.go:117] "RemoveContainer" containerID="252a02c259a85a809f4957d52872633ce3c55071be848e5f25674a5adafdbe77" Mar 12 15:48:53 crc kubenswrapper[4832]: I0312 15:48:53.029390 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ptct_ce400e6f-fdd2-48e7-98bf-bc68d986e829/registry-server/0.log" Mar 12 15:48:53 crc kubenswrapper[4832]: I0312 15:48:53.074688 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gd9lm_736a2adc-4c77-4a76-8fb3-a2c008cb8b6b/extract-utilities/0.log" Mar 12 15:48:53 crc kubenswrapper[4832]: I0312 15:48:53.208203 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gd9lm_736a2adc-4c77-4a76-8fb3-a2c008cb8b6b/extract-utilities/0.log" Mar 12 15:48:53 crc kubenswrapper[4832]: I0312 15:48:53.235969 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gd9lm_736a2adc-4c77-4a76-8fb3-a2c008cb8b6b/extract-content/0.log" Mar 12 15:48:53 crc kubenswrapper[4832]: I0312 15:48:53.264228 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gd9lm_736a2adc-4c77-4a76-8fb3-a2c008cb8b6b/extract-content/0.log" Mar 12 15:48:53 crc kubenswrapper[4832]: I0312 15:48:53.465539 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gd9lm_736a2adc-4c77-4a76-8fb3-a2c008cb8b6b/extract-utilities/0.log" Mar 12 15:48:53 crc 
kubenswrapper[4832]: I0312 15:48:53.490552 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gd9lm_736a2adc-4c77-4a76-8fb3-a2c008cb8b6b/extract-content/0.log" Mar 12 15:48:53 crc kubenswrapper[4832]: I0312 15:48:53.526614 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gd9lm_736a2adc-4c77-4a76-8fb3-a2c008cb8b6b/registry-server/0.log" Mar 12 15:48:53 crc kubenswrapper[4832]: I0312 15:48:53.619650 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85" Mar 12 15:48:53 crc kubenswrapper[4832]: E0312 15:48:53.620028 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:48:53 crc kubenswrapper[4832]: I0312 15:48:53.638759 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s8q7d_24ad1ef0-306a-47c0-95bb-7f3a55a471ea/extract-utilities/0.log" Mar 12 15:48:53 crc kubenswrapper[4832]: I0312 15:48:53.834807 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s8q7d_24ad1ef0-306a-47c0-95bb-7f3a55a471ea/extract-utilities/0.log" Mar 12 15:48:53 crc kubenswrapper[4832]: I0312 15:48:53.841623 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s8q7d_24ad1ef0-306a-47c0-95bb-7f3a55a471ea/extract-content/0.log" Mar 12 15:48:53 crc kubenswrapper[4832]: I0312 15:48:53.874111 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-s8q7d_24ad1ef0-306a-47c0-95bb-7f3a55a471ea/extract-content/0.log" Mar 12 15:48:54 crc kubenswrapper[4832]: I0312 15:48:54.074833 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s8q7d_24ad1ef0-306a-47c0-95bb-7f3a55a471ea/extract-utilities/0.log" Mar 12 15:48:54 crc kubenswrapper[4832]: I0312 15:48:54.086476 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s8q7d_24ad1ef0-306a-47c0-95bb-7f3a55a471ea/extract-content/0.log" Mar 12 15:48:54 crc kubenswrapper[4832]: I0312 15:48:54.580158 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s8q7d_24ad1ef0-306a-47c0-95bb-7f3a55a471ea/registry-server/0.log" Mar 12 15:49:05 crc kubenswrapper[4832]: I0312 15:49:05.620254 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85" Mar 12 15:49:05 crc kubenswrapper[4832]: E0312 15:49:05.620891 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:49:20 crc kubenswrapper[4832]: I0312 15:49:20.619936 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85" Mar 12 15:49:20 crc kubenswrapper[4832]: E0312 15:49:20.620741 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:49:32 crc kubenswrapper[4832]: I0312 15:49:32.635138 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85" Mar 12 15:49:32 crc kubenswrapper[4832]: E0312 15:49:32.635820 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:49:46 crc kubenswrapper[4832]: I0312 15:49:46.620059 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85" Mar 12 15:49:46 crc kubenswrapper[4832]: E0312 15:49:46.621000 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:50:00 crc kubenswrapper[4832]: I0312 15:50:00.149230 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555510-wn5ch"] Mar 12 15:50:00 crc kubenswrapper[4832]: E0312 15:50:00.151378 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1eb9add-a6ce-4ffc-b570-c57c1c148656" containerName="registry-server" Mar 12 15:50:00 crc kubenswrapper[4832]: I0312 
15:50:00.151479 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1eb9add-a6ce-4ffc-b570-c57c1c148656" containerName="registry-server" Mar 12 15:50:00 crc kubenswrapper[4832]: E0312 15:50:00.151584 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fddee263-2225-4a47-8361-60b13e70e608" containerName="oc" Mar 12 15:50:00 crc kubenswrapper[4832]: I0312 15:50:00.151662 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fddee263-2225-4a47-8361-60b13e70e608" containerName="oc" Mar 12 15:50:00 crc kubenswrapper[4832]: E0312 15:50:00.151755 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1eb9add-a6ce-4ffc-b570-c57c1c148656" containerName="extract-content" Mar 12 15:50:00 crc kubenswrapper[4832]: I0312 15:50:00.151833 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1eb9add-a6ce-4ffc-b570-c57c1c148656" containerName="extract-content" Mar 12 15:50:00 crc kubenswrapper[4832]: E0312 15:50:00.151913 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1eb9add-a6ce-4ffc-b570-c57c1c148656" containerName="extract-utilities" Mar 12 15:50:00 crc kubenswrapper[4832]: I0312 15:50:00.151983 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1eb9add-a6ce-4ffc-b570-c57c1c148656" containerName="extract-utilities" Mar 12 15:50:00 crc kubenswrapper[4832]: I0312 15:50:00.152282 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fddee263-2225-4a47-8361-60b13e70e608" containerName="oc" Mar 12 15:50:00 crc kubenswrapper[4832]: I0312 15:50:00.152407 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1eb9add-a6ce-4ffc-b570-c57c1c148656" containerName="registry-server" Mar 12 15:50:00 crc kubenswrapper[4832]: I0312 15:50:00.153557 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555510-wn5ch" Mar 12 15:50:00 crc kubenswrapper[4832]: I0312 15:50:00.156281 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:50:00 crc kubenswrapper[4832]: I0312 15:50:00.156298 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:50:00 crc kubenswrapper[4832]: I0312 15:50:00.156828 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:50:00 crc kubenswrapper[4832]: I0312 15:50:00.169921 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555510-wn5ch"] Mar 12 15:50:00 crc kubenswrapper[4832]: I0312 15:50:00.267106 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl2s4\" (UniqueName: \"kubernetes.io/projected/f8f5617d-6e3a-47f4-851a-039f6bfe3808-kube-api-access-dl2s4\") pod \"auto-csr-approver-29555510-wn5ch\" (UID: \"f8f5617d-6e3a-47f4-851a-039f6bfe3808\") " pod="openshift-infra/auto-csr-approver-29555510-wn5ch" Mar 12 15:50:00 crc kubenswrapper[4832]: I0312 15:50:00.370174 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl2s4\" (UniqueName: \"kubernetes.io/projected/f8f5617d-6e3a-47f4-851a-039f6bfe3808-kube-api-access-dl2s4\") pod \"auto-csr-approver-29555510-wn5ch\" (UID: \"f8f5617d-6e3a-47f4-851a-039f6bfe3808\") " pod="openshift-infra/auto-csr-approver-29555510-wn5ch" Mar 12 15:50:00 crc kubenswrapper[4832]: I0312 15:50:00.458556 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl2s4\" (UniqueName: \"kubernetes.io/projected/f8f5617d-6e3a-47f4-851a-039f6bfe3808-kube-api-access-dl2s4\") pod \"auto-csr-approver-29555510-wn5ch\" (UID: \"f8f5617d-6e3a-47f4-851a-039f6bfe3808\") " 
pod="openshift-infra/auto-csr-approver-29555510-wn5ch" Mar 12 15:50:00 crc kubenswrapper[4832]: I0312 15:50:00.476228 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555510-wn5ch" Mar 12 15:50:00 crc kubenswrapper[4832]: I0312 15:50:00.620156 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85" Mar 12 15:50:00 crc kubenswrapper[4832]: E0312 15:50:00.620474 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:50:00 crc kubenswrapper[4832]: I0312 15:50:00.957189 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555510-wn5ch"] Mar 12 15:50:00 crc kubenswrapper[4832]: W0312 15:50:00.970051 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8f5617d_6e3a_47f4_851a_039f6bfe3808.slice/crio-5fc3e95375a3750b47872cee5228de0b41be5070f5bbbb70319daaf994f1d1df WatchSource:0}: Error finding container 5fc3e95375a3750b47872cee5228de0b41be5070f5bbbb70319daaf994f1d1df: Status 404 returned error can't find the container with id 5fc3e95375a3750b47872cee5228de0b41be5070f5bbbb70319daaf994f1d1df Mar 12 15:50:00 crc kubenswrapper[4832]: I0312 15:50:00.975032 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 15:50:01 crc kubenswrapper[4832]: I0312 15:50:01.532765 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555510-wn5ch" 
event={"ID":"f8f5617d-6e3a-47f4-851a-039f6bfe3808","Type":"ContainerStarted","Data":"5fc3e95375a3750b47872cee5228de0b41be5070f5bbbb70319daaf994f1d1df"} Mar 12 15:50:02 crc kubenswrapper[4832]: I0312 15:50:02.543065 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555510-wn5ch" event={"ID":"f8f5617d-6e3a-47f4-851a-039f6bfe3808","Type":"ContainerStarted","Data":"3eeb0d2b130bb005de26edbd9e04ba8f0f02dd3d87dac4de81e2c36c9ddec974"} Mar 12 15:50:02 crc kubenswrapper[4832]: I0312 15:50:02.561955 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555510-wn5ch" podStartSLOduration=1.393642654 podStartE2EDuration="2.561938455s" podCreationTimestamp="2026-03-12 15:50:00 +0000 UTC" firstStartedPulling="2026-03-12 15:50:00.974818971 +0000 UTC m=+3759.618833197" lastFinishedPulling="2026-03-12 15:50:02.143114752 +0000 UTC m=+3760.787128998" observedRunningTime="2026-03-12 15:50:02.560267857 +0000 UTC m=+3761.204282093" watchObservedRunningTime="2026-03-12 15:50:02.561938455 +0000 UTC m=+3761.205952681" Mar 12 15:50:03 crc kubenswrapper[4832]: I0312 15:50:03.553131 4832 generic.go:334] "Generic (PLEG): container finished" podID="f8f5617d-6e3a-47f4-851a-039f6bfe3808" containerID="3eeb0d2b130bb005de26edbd9e04ba8f0f02dd3d87dac4de81e2c36c9ddec974" exitCode=0 Mar 12 15:50:03 crc kubenswrapper[4832]: I0312 15:50:03.553236 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555510-wn5ch" event={"ID":"f8f5617d-6e3a-47f4-851a-039f6bfe3808","Type":"ContainerDied","Data":"3eeb0d2b130bb005de26edbd9e04ba8f0f02dd3d87dac4de81e2c36c9ddec974"} Mar 12 15:50:04 crc kubenswrapper[4832]: I0312 15:50:04.923733 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555510-wn5ch" Mar 12 15:50:04 crc kubenswrapper[4832]: I0312 15:50:04.969065 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl2s4\" (UniqueName: \"kubernetes.io/projected/f8f5617d-6e3a-47f4-851a-039f6bfe3808-kube-api-access-dl2s4\") pod \"f8f5617d-6e3a-47f4-851a-039f6bfe3808\" (UID: \"f8f5617d-6e3a-47f4-851a-039f6bfe3808\") " Mar 12 15:50:05 crc kubenswrapper[4832]: I0312 15:50:05.004846 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8f5617d-6e3a-47f4-851a-039f6bfe3808-kube-api-access-dl2s4" (OuterVolumeSpecName: "kube-api-access-dl2s4") pod "f8f5617d-6e3a-47f4-851a-039f6bfe3808" (UID: "f8f5617d-6e3a-47f4-851a-039f6bfe3808"). InnerVolumeSpecName "kube-api-access-dl2s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:50:05 crc kubenswrapper[4832]: I0312 15:50:05.071756 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dl2s4\" (UniqueName: \"kubernetes.io/projected/f8f5617d-6e3a-47f4-851a-039f6bfe3808-kube-api-access-dl2s4\") on node \"crc\" DevicePath \"\"" Mar 12 15:50:05 crc kubenswrapper[4832]: I0312 15:50:05.589336 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555510-wn5ch" event={"ID":"f8f5617d-6e3a-47f4-851a-039f6bfe3808","Type":"ContainerDied","Data":"5fc3e95375a3750b47872cee5228de0b41be5070f5bbbb70319daaf994f1d1df"} Mar 12 15:50:05 crc kubenswrapper[4832]: I0312 15:50:05.589390 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fc3e95375a3750b47872cee5228de0b41be5070f5bbbb70319daaf994f1d1df" Mar 12 15:50:05 crc kubenswrapper[4832]: I0312 15:50:05.589429 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555510-wn5ch" Mar 12 15:50:05 crc kubenswrapper[4832]: I0312 15:50:05.655491 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555504-tqq42"] Mar 12 15:50:05 crc kubenswrapper[4832]: I0312 15:50:05.670762 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555504-tqq42"] Mar 12 15:50:06 crc kubenswrapper[4832]: I0312 15:50:06.633912 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3477640f-166b-4c95-bb4e-dda23fe29206" path="/var/lib/kubelet/pods/3477640f-166b-4c95-bb4e-dda23fe29206/volumes" Mar 12 15:50:12 crc kubenswrapper[4832]: I0312 15:50:12.634641 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85" Mar 12 15:50:12 crc kubenswrapper[4832]: E0312 15:50:12.635708 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:50:26 crc kubenswrapper[4832]: I0312 15:50:26.620798 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85" Mar 12 15:50:26 crc kubenswrapper[4832]: E0312 15:50:26.622039 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" 
podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:50:37 crc kubenswrapper[4832]: I0312 15:50:37.936705 4832 generic.go:334] "Generic (PLEG): container finished" podID="43db85df-6864-49e9-9b60-eb6050c1cb01" containerID="c3aa646b371773e2fac4277a42c16631403b65634c46e6bde4bc308e1a92b090" exitCode=0 Mar 12 15:50:37 crc kubenswrapper[4832]: I0312 15:50:37.936762 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rqgtd/must-gather-2wnvd" event={"ID":"43db85df-6864-49e9-9b60-eb6050c1cb01","Type":"ContainerDied","Data":"c3aa646b371773e2fac4277a42c16631403b65634c46e6bde4bc308e1a92b090"} Mar 12 15:50:37 crc kubenswrapper[4832]: I0312 15:50:37.938069 4832 scope.go:117] "RemoveContainer" containerID="c3aa646b371773e2fac4277a42c16631403b65634c46e6bde4bc308e1a92b090" Mar 12 15:50:38 crc kubenswrapper[4832]: I0312 15:50:38.732662 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rqgtd_must-gather-2wnvd_43db85df-6864-49e9-9b60-eb6050c1cb01/gather/0.log" Mar 12 15:50:41 crc kubenswrapper[4832]: I0312 15:50:41.620849 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85" Mar 12 15:50:41 crc kubenswrapper[4832]: E0312 15:50:41.622110 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:50:46 crc kubenswrapper[4832]: I0312 15:50:46.662986 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rqgtd/must-gather-2wnvd"] Mar 12 15:50:46 crc kubenswrapper[4832]: I0312 15:50:46.663620 4832 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-must-gather-rqgtd/must-gather-2wnvd" podUID="43db85df-6864-49e9-9b60-eb6050c1cb01" containerName="copy" containerID="cri-o://114d78f404e7f547b4b8cb9eb38dec53f5379b736ab1410a1128490926d7a7a8" gracePeriod=2 Mar 12 15:50:46 crc kubenswrapper[4832]: I0312 15:50:46.674063 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rqgtd/must-gather-2wnvd"] Mar 12 15:50:47 crc kubenswrapper[4832]: I0312 15:50:47.058263 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rqgtd_must-gather-2wnvd_43db85df-6864-49e9-9b60-eb6050c1cb01/copy/0.log" Mar 12 15:50:47 crc kubenswrapper[4832]: I0312 15:50:47.058916 4832 generic.go:334] "Generic (PLEG): container finished" podID="43db85df-6864-49e9-9b60-eb6050c1cb01" containerID="114d78f404e7f547b4b8cb9eb38dec53f5379b736ab1410a1128490926d7a7a8" exitCode=143 Mar 12 15:50:47 crc kubenswrapper[4832]: I0312 15:50:47.058970 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21e7a37bff2a36f91298c7b6351cb13e210b81bb89a055169416424959d68add" Mar 12 15:50:47 crc kubenswrapper[4832]: I0312 15:50:47.140251 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rqgtd_must-gather-2wnvd_43db85df-6864-49e9-9b60-eb6050c1cb01/copy/0.log" Mar 12 15:50:47 crc kubenswrapper[4832]: I0312 15:50:47.140721 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rqgtd/must-gather-2wnvd" Mar 12 15:50:47 crc kubenswrapper[4832]: I0312 15:50:47.221556 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5tfq\" (UniqueName: \"kubernetes.io/projected/43db85df-6864-49e9-9b60-eb6050c1cb01-kube-api-access-k5tfq\") pod \"43db85df-6864-49e9-9b60-eb6050c1cb01\" (UID: \"43db85df-6864-49e9-9b60-eb6050c1cb01\") " Mar 12 15:50:47 crc kubenswrapper[4832]: I0312 15:50:47.221598 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/43db85df-6864-49e9-9b60-eb6050c1cb01-must-gather-output\") pod \"43db85df-6864-49e9-9b60-eb6050c1cb01\" (UID: \"43db85df-6864-49e9-9b60-eb6050c1cb01\") " Mar 12 15:50:47 crc kubenswrapper[4832]: I0312 15:50:47.230615 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43db85df-6864-49e9-9b60-eb6050c1cb01-kube-api-access-k5tfq" (OuterVolumeSpecName: "kube-api-access-k5tfq") pod "43db85df-6864-49e9-9b60-eb6050c1cb01" (UID: "43db85df-6864-49e9-9b60-eb6050c1cb01"). InnerVolumeSpecName "kube-api-access-k5tfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:50:47 crc kubenswrapper[4832]: I0312 15:50:47.324044 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5tfq\" (UniqueName: \"kubernetes.io/projected/43db85df-6864-49e9-9b60-eb6050c1cb01-kube-api-access-k5tfq\") on node \"crc\" DevicePath \"\"" Mar 12 15:50:47 crc kubenswrapper[4832]: I0312 15:50:47.397054 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43db85df-6864-49e9-9b60-eb6050c1cb01-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "43db85df-6864-49e9-9b60-eb6050c1cb01" (UID: "43db85df-6864-49e9-9b60-eb6050c1cb01"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:50:47 crc kubenswrapper[4832]: I0312 15:50:47.426370 4832 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/43db85df-6864-49e9-9b60-eb6050c1cb01-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 12 15:50:48 crc kubenswrapper[4832]: I0312 15:50:48.065626 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rqgtd/must-gather-2wnvd" Mar 12 15:50:48 crc kubenswrapper[4832]: I0312 15:50:48.636333 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43db85df-6864-49e9-9b60-eb6050c1cb01" path="/var/lib/kubelet/pods/43db85df-6864-49e9-9b60-eb6050c1cb01/volumes" Mar 12 15:50:53 crc kubenswrapper[4832]: I0312 15:50:53.049328 4832 scope.go:117] "RemoveContainer" containerID="89b8612775d210ae445c852b19b02962f7ab4b8d19eaa870bcdb59888fbe59e9" Mar 12 15:50:54 crc kubenswrapper[4832]: I0312 15:50:54.619811 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85" Mar 12 15:50:54 crc kubenswrapper[4832]: E0312 15:50:54.620782 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:51:05 crc kubenswrapper[4832]: I0312 15:51:05.622923 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85" Mar 12 15:51:05 crc kubenswrapper[4832]: E0312 15:51:05.625403 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:51:17 crc kubenswrapper[4832]: I0312 15:51:17.620974 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85" Mar 12 15:51:17 crc kubenswrapper[4832]: E0312 15:51:17.622245 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:51:20 crc kubenswrapper[4832]: I0312 15:51:20.845844 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s8b2v"] Mar 12 15:51:20 crc kubenswrapper[4832]: E0312 15:51:20.847491 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43db85df-6864-49e9-9b60-eb6050c1cb01" containerName="copy" Mar 12 15:51:20 crc kubenswrapper[4832]: I0312 15:51:20.847623 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="43db85df-6864-49e9-9b60-eb6050c1cb01" containerName="copy" Mar 12 15:51:20 crc kubenswrapper[4832]: E0312 15:51:20.847706 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43db85df-6864-49e9-9b60-eb6050c1cb01" containerName="gather" Mar 12 15:51:20 crc kubenswrapper[4832]: I0312 15:51:20.847731 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="43db85df-6864-49e9-9b60-eb6050c1cb01" containerName="gather" Mar 12 15:51:20 crc kubenswrapper[4832]: E0312 15:51:20.847779 4832 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f8f5617d-6e3a-47f4-851a-039f6bfe3808" containerName="oc" Mar 12 15:51:20 crc kubenswrapper[4832]: I0312 15:51:20.847799 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f5617d-6e3a-47f4-851a-039f6bfe3808" containerName="oc" Mar 12 15:51:20 crc kubenswrapper[4832]: I0312 15:51:20.848250 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="43db85df-6864-49e9-9b60-eb6050c1cb01" containerName="copy" Mar 12 15:51:20 crc kubenswrapper[4832]: I0312 15:51:20.848298 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="43db85df-6864-49e9-9b60-eb6050c1cb01" containerName="gather" Mar 12 15:51:20 crc kubenswrapper[4832]: I0312 15:51:20.848352 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8f5617d-6e3a-47f4-851a-039f6bfe3808" containerName="oc" Mar 12 15:51:20 crc kubenswrapper[4832]: I0312 15:51:20.851404 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s8b2v" Mar 12 15:51:20 crc kubenswrapper[4832]: I0312 15:51:20.882148 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s8b2v"] Mar 12 15:51:20 crc kubenswrapper[4832]: I0312 15:51:20.949619 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/162dcc31-89ba-4af5-9c81-b1dbbb08bfc7-catalog-content\") pod \"community-operators-s8b2v\" (UID: \"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7\") " pod="openshift-marketplace/community-operators-s8b2v" Mar 12 15:51:20 crc kubenswrapper[4832]: I0312 15:51:20.949764 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnvw2\" (UniqueName: \"kubernetes.io/projected/162dcc31-89ba-4af5-9c81-b1dbbb08bfc7-kube-api-access-lnvw2\") pod \"community-operators-s8b2v\" (UID: 
\"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7\") " pod="openshift-marketplace/community-operators-s8b2v" Mar 12 15:51:20 crc kubenswrapper[4832]: I0312 15:51:20.949838 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/162dcc31-89ba-4af5-9c81-b1dbbb08bfc7-utilities\") pod \"community-operators-s8b2v\" (UID: \"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7\") " pod="openshift-marketplace/community-operators-s8b2v" Mar 12 15:51:21 crc kubenswrapper[4832]: I0312 15:51:21.050984 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/162dcc31-89ba-4af5-9c81-b1dbbb08bfc7-catalog-content\") pod \"community-operators-s8b2v\" (UID: \"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7\") " pod="openshift-marketplace/community-operators-s8b2v" Mar 12 15:51:21 crc kubenswrapper[4832]: I0312 15:51:21.051316 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnvw2\" (UniqueName: \"kubernetes.io/projected/162dcc31-89ba-4af5-9c81-b1dbbb08bfc7-kube-api-access-lnvw2\") pod \"community-operators-s8b2v\" (UID: \"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7\") " pod="openshift-marketplace/community-operators-s8b2v" Mar 12 15:51:21 crc kubenswrapper[4832]: I0312 15:51:21.051382 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/162dcc31-89ba-4af5-9c81-b1dbbb08bfc7-utilities\") pod \"community-operators-s8b2v\" (UID: \"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7\") " pod="openshift-marketplace/community-operators-s8b2v" Mar 12 15:51:21 crc kubenswrapper[4832]: I0312 15:51:21.051530 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/162dcc31-89ba-4af5-9c81-b1dbbb08bfc7-catalog-content\") pod \"community-operators-s8b2v\" (UID: 
\"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7\") " pod="openshift-marketplace/community-operators-s8b2v" Mar 12 15:51:21 crc kubenswrapper[4832]: I0312 15:51:21.051703 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/162dcc31-89ba-4af5-9c81-b1dbbb08bfc7-utilities\") pod \"community-operators-s8b2v\" (UID: \"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7\") " pod="openshift-marketplace/community-operators-s8b2v" Mar 12 15:51:21 crc kubenswrapper[4832]: I0312 15:51:21.077917 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnvw2\" (UniqueName: \"kubernetes.io/projected/162dcc31-89ba-4af5-9c81-b1dbbb08bfc7-kube-api-access-lnvw2\") pod \"community-operators-s8b2v\" (UID: \"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7\") " pod="openshift-marketplace/community-operators-s8b2v" Mar 12 15:51:21 crc kubenswrapper[4832]: I0312 15:51:21.192743 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s8b2v" Mar 12 15:51:21 crc kubenswrapper[4832]: I0312 15:51:21.730587 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s8b2v"] Mar 12 15:51:21 crc kubenswrapper[4832]: W0312 15:51:21.739128 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod162dcc31_89ba_4af5_9c81_b1dbbb08bfc7.slice/crio-a950ad9eb68f6b31d4659671936c44c51bb8bd9695e8a3a34ee09df104b05365 WatchSource:0}: Error finding container a950ad9eb68f6b31d4659671936c44c51bb8bd9695e8a3a34ee09df104b05365: Status 404 returned error can't find the container with id a950ad9eb68f6b31d4659671936c44c51bb8bd9695e8a3a34ee09df104b05365 Mar 12 15:51:22 crc kubenswrapper[4832]: I0312 15:51:22.444899 4832 generic.go:334] "Generic (PLEG): container finished" podID="162dcc31-89ba-4af5-9c81-b1dbbb08bfc7" 
containerID="642f924112ffe9decf464fec13b7285c47fe8b3dc38114a333a09b49ca2706c4" exitCode=0 Mar 12 15:51:22 crc kubenswrapper[4832]: I0312 15:51:22.444958 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8b2v" event={"ID":"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7","Type":"ContainerDied","Data":"642f924112ffe9decf464fec13b7285c47fe8b3dc38114a333a09b49ca2706c4"} Mar 12 15:51:22 crc kubenswrapper[4832]: I0312 15:51:22.445274 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8b2v" event={"ID":"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7","Type":"ContainerStarted","Data":"a950ad9eb68f6b31d4659671936c44c51bb8bd9695e8a3a34ee09df104b05365"} Mar 12 15:51:23 crc kubenswrapper[4832]: I0312 15:51:23.456464 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8b2v" event={"ID":"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7","Type":"ContainerStarted","Data":"69e39df1d5f7a3a2a02a3b690d1691a5fd1868b14cc92aac8431279466d260f6"} Mar 12 15:51:25 crc kubenswrapper[4832]: I0312 15:51:25.486359 4832 generic.go:334] "Generic (PLEG): container finished" podID="162dcc31-89ba-4af5-9c81-b1dbbb08bfc7" containerID="69e39df1d5f7a3a2a02a3b690d1691a5fd1868b14cc92aac8431279466d260f6" exitCode=0 Mar 12 15:51:25 crc kubenswrapper[4832]: I0312 15:51:25.486462 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8b2v" event={"ID":"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7","Type":"ContainerDied","Data":"69e39df1d5f7a3a2a02a3b690d1691a5fd1868b14cc92aac8431279466d260f6"} Mar 12 15:51:26 crc kubenswrapper[4832]: I0312 15:51:26.503463 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8b2v" event={"ID":"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7","Type":"ContainerStarted","Data":"9e51fc2859bb17108b2a41936d9825dab2fd55f06e1991aa4859944976510b98"} Mar 12 15:51:26 crc 
kubenswrapper[4832]: I0312 15:51:26.551067 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s8b2v" podStartSLOduration=3.092270543 podStartE2EDuration="6.5510426s" podCreationTimestamp="2026-03-12 15:51:20 +0000 UTC" firstStartedPulling="2026-03-12 15:51:22.448677857 +0000 UTC m=+3841.092692103" lastFinishedPulling="2026-03-12 15:51:25.907449894 +0000 UTC m=+3844.551464160" observedRunningTime="2026-03-12 15:51:26.536865605 +0000 UTC m=+3845.180879861" watchObservedRunningTime="2026-03-12 15:51:26.5510426 +0000 UTC m=+3845.195056846" Mar 12 15:51:29 crc kubenswrapper[4832]: I0312 15:51:29.620746 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85" Mar 12 15:51:29 crc kubenswrapper[4832]: E0312 15:51:29.621987 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:51:31 crc kubenswrapper[4832]: I0312 15:51:31.193889 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s8b2v" Mar 12 15:51:31 crc kubenswrapper[4832]: I0312 15:51:31.194383 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s8b2v" Mar 12 15:51:31 crc kubenswrapper[4832]: I0312 15:51:31.278337 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s8b2v" Mar 12 15:51:31 crc kubenswrapper[4832]: I0312 15:51:31.609437 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-s8b2v" Mar 12 15:51:31 crc kubenswrapper[4832]: I0312 15:51:31.668373 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s8b2v"] Mar 12 15:51:33 crc kubenswrapper[4832]: I0312 15:51:33.578134 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s8b2v" podUID="162dcc31-89ba-4af5-9c81-b1dbbb08bfc7" containerName="registry-server" containerID="cri-o://9e51fc2859bb17108b2a41936d9825dab2fd55f06e1991aa4859944976510b98" gracePeriod=2 Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.115264 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s8b2v" Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.233314 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/162dcc31-89ba-4af5-9c81-b1dbbb08bfc7-utilities\") pod \"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7\" (UID: \"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7\") " Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.234014 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/162dcc31-89ba-4af5-9c81-b1dbbb08bfc7-catalog-content\") pod \"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7\" (UID: \"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7\") " Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.234134 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnvw2\" (UniqueName: \"kubernetes.io/projected/162dcc31-89ba-4af5-9c81-b1dbbb08bfc7-kube-api-access-lnvw2\") pod \"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7\" (UID: \"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7\") " Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.236222 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/162dcc31-89ba-4af5-9c81-b1dbbb08bfc7-utilities" (OuterVolumeSpecName: "utilities") pod "162dcc31-89ba-4af5-9c81-b1dbbb08bfc7" (UID: "162dcc31-89ba-4af5-9c81-b1dbbb08bfc7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.248596 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/162dcc31-89ba-4af5-9c81-b1dbbb08bfc7-kube-api-access-lnvw2" (OuterVolumeSpecName: "kube-api-access-lnvw2") pod "162dcc31-89ba-4af5-9c81-b1dbbb08bfc7" (UID: "162dcc31-89ba-4af5-9c81-b1dbbb08bfc7"). InnerVolumeSpecName "kube-api-access-lnvw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.315536 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/162dcc31-89ba-4af5-9c81-b1dbbb08bfc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "162dcc31-89ba-4af5-9c81-b1dbbb08bfc7" (UID: "162dcc31-89ba-4af5-9c81-b1dbbb08bfc7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.337036 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/162dcc31-89ba-4af5-9c81-b1dbbb08bfc7-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.337077 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/162dcc31-89ba-4af5-9c81-b1dbbb08bfc7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.337097 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnvw2\" (UniqueName: \"kubernetes.io/projected/162dcc31-89ba-4af5-9c81-b1dbbb08bfc7-kube-api-access-lnvw2\") on node \"crc\" DevicePath \"\"" Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.594683 4832 generic.go:334] "Generic (PLEG): container finished" podID="162dcc31-89ba-4af5-9c81-b1dbbb08bfc7" containerID="9e51fc2859bb17108b2a41936d9825dab2fd55f06e1991aa4859944976510b98" exitCode=0 Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.594752 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8b2v" event={"ID":"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7","Type":"ContainerDied","Data":"9e51fc2859bb17108b2a41936d9825dab2fd55f06e1991aa4859944976510b98"} Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.594781 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s8b2v" Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.594840 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8b2v" event={"ID":"162dcc31-89ba-4af5-9c81-b1dbbb08bfc7","Type":"ContainerDied","Data":"a950ad9eb68f6b31d4659671936c44c51bb8bd9695e8a3a34ee09df104b05365"} Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.594882 4832 scope.go:117] "RemoveContainer" containerID="9e51fc2859bb17108b2a41936d9825dab2fd55f06e1991aa4859944976510b98" Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.659897 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s8b2v"] Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.664591 4832 scope.go:117] "RemoveContainer" containerID="69e39df1d5f7a3a2a02a3b690d1691a5fd1868b14cc92aac8431279466d260f6" Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.674725 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s8b2v"] Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.699576 4832 scope.go:117] "RemoveContainer" containerID="642f924112ffe9decf464fec13b7285c47fe8b3dc38114a333a09b49ca2706c4" Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.762779 4832 scope.go:117] "RemoveContainer" containerID="9e51fc2859bb17108b2a41936d9825dab2fd55f06e1991aa4859944976510b98" Mar 12 15:51:34 crc kubenswrapper[4832]: E0312 15:51:34.763282 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e51fc2859bb17108b2a41936d9825dab2fd55f06e1991aa4859944976510b98\": container with ID starting with 9e51fc2859bb17108b2a41936d9825dab2fd55f06e1991aa4859944976510b98 not found: ID does not exist" containerID="9e51fc2859bb17108b2a41936d9825dab2fd55f06e1991aa4859944976510b98" Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.763320 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e51fc2859bb17108b2a41936d9825dab2fd55f06e1991aa4859944976510b98"} err="failed to get container status \"9e51fc2859bb17108b2a41936d9825dab2fd55f06e1991aa4859944976510b98\": rpc error: code = NotFound desc = could not find container \"9e51fc2859bb17108b2a41936d9825dab2fd55f06e1991aa4859944976510b98\": container with ID starting with 9e51fc2859bb17108b2a41936d9825dab2fd55f06e1991aa4859944976510b98 not found: ID does not exist" Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.763346 4832 scope.go:117] "RemoveContainer" containerID="69e39df1d5f7a3a2a02a3b690d1691a5fd1868b14cc92aac8431279466d260f6" Mar 12 15:51:34 crc kubenswrapper[4832]: E0312 15:51:34.763863 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69e39df1d5f7a3a2a02a3b690d1691a5fd1868b14cc92aac8431279466d260f6\": container with ID starting with 69e39df1d5f7a3a2a02a3b690d1691a5fd1868b14cc92aac8431279466d260f6 not found: ID does not exist" containerID="69e39df1d5f7a3a2a02a3b690d1691a5fd1868b14cc92aac8431279466d260f6" Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.763920 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69e39df1d5f7a3a2a02a3b690d1691a5fd1868b14cc92aac8431279466d260f6"} err="failed to get container status \"69e39df1d5f7a3a2a02a3b690d1691a5fd1868b14cc92aac8431279466d260f6\": rpc error: code = NotFound desc = could not find container \"69e39df1d5f7a3a2a02a3b690d1691a5fd1868b14cc92aac8431279466d260f6\": container with ID starting with 69e39df1d5f7a3a2a02a3b690d1691a5fd1868b14cc92aac8431279466d260f6 not found: ID does not exist" Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.763946 4832 scope.go:117] "RemoveContainer" containerID="642f924112ffe9decf464fec13b7285c47fe8b3dc38114a333a09b49ca2706c4" Mar 12 15:51:34 crc kubenswrapper[4832]: E0312 
15:51:34.764806 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"642f924112ffe9decf464fec13b7285c47fe8b3dc38114a333a09b49ca2706c4\": container with ID starting with 642f924112ffe9decf464fec13b7285c47fe8b3dc38114a333a09b49ca2706c4 not found: ID does not exist" containerID="642f924112ffe9decf464fec13b7285c47fe8b3dc38114a333a09b49ca2706c4" Mar 12 15:51:34 crc kubenswrapper[4832]: I0312 15:51:34.764853 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"642f924112ffe9decf464fec13b7285c47fe8b3dc38114a333a09b49ca2706c4"} err="failed to get container status \"642f924112ffe9decf464fec13b7285c47fe8b3dc38114a333a09b49ca2706c4\": rpc error: code = NotFound desc = could not find container \"642f924112ffe9decf464fec13b7285c47fe8b3dc38114a333a09b49ca2706c4\": container with ID starting with 642f924112ffe9decf464fec13b7285c47fe8b3dc38114a333a09b49ca2706c4 not found: ID does not exist" Mar 12 15:51:36 crc kubenswrapper[4832]: I0312 15:51:36.640355 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="162dcc31-89ba-4af5-9c81-b1dbbb08bfc7" path="/var/lib/kubelet/pods/162dcc31-89ba-4af5-9c81-b1dbbb08bfc7/volumes" Mar 12 15:51:42 crc kubenswrapper[4832]: I0312 15:51:42.627939 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85" Mar 12 15:51:42 crc kubenswrapper[4832]: E0312 15:51:42.635776 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:51:53 crc kubenswrapper[4832]: I0312 15:51:53.182946 
4832 scope.go:117] "RemoveContainer" containerID="c3aa646b371773e2fac4277a42c16631403b65634c46e6bde4bc308e1a92b090" Mar 12 15:51:53 crc kubenswrapper[4832]: I0312 15:51:53.303832 4832 scope.go:117] "RemoveContainer" containerID="114d78f404e7f547b4b8cb9eb38dec53f5379b736ab1410a1128490926d7a7a8" Mar 12 15:51:53 crc kubenswrapper[4832]: I0312 15:51:53.328287 4832 scope.go:117] "RemoveContainer" containerID="3bb162359f32ab8a31e4d75fc1f5d18364474959900276a5b69783a44bcbf32c" Mar 12 15:51:55 crc kubenswrapper[4832]: I0312 15:51:55.620310 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85" Mar 12 15:51:55 crc kubenswrapper[4832]: E0312 15:51:55.623240 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" Mar 12 15:52:00 crc kubenswrapper[4832]: I0312 15:52:00.175851 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555512-jmhfj"] Mar 12 15:52:00 crc kubenswrapper[4832]: E0312 15:52:00.177487 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="162dcc31-89ba-4af5-9c81-b1dbbb08bfc7" containerName="extract-utilities" Mar 12 15:52:00 crc kubenswrapper[4832]: I0312 15:52:00.177555 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="162dcc31-89ba-4af5-9c81-b1dbbb08bfc7" containerName="extract-utilities" Mar 12 15:52:00 crc kubenswrapper[4832]: E0312 15:52:00.177602 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="162dcc31-89ba-4af5-9c81-b1dbbb08bfc7" containerName="registry-server" Mar 12 15:52:00 crc kubenswrapper[4832]: I0312 15:52:00.177623 4832 
state_mem.go:107] "Deleted CPUSet assignment" podUID="162dcc31-89ba-4af5-9c81-b1dbbb08bfc7" containerName="registry-server" Mar 12 15:52:00 crc kubenswrapper[4832]: E0312 15:52:00.177719 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="162dcc31-89ba-4af5-9c81-b1dbbb08bfc7" containerName="extract-content" Mar 12 15:52:00 crc kubenswrapper[4832]: I0312 15:52:00.177738 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="162dcc31-89ba-4af5-9c81-b1dbbb08bfc7" containerName="extract-content" Mar 12 15:52:00 crc kubenswrapper[4832]: I0312 15:52:00.178261 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="162dcc31-89ba-4af5-9c81-b1dbbb08bfc7" containerName="registry-server" Mar 12 15:52:00 crc kubenswrapper[4832]: I0312 15:52:00.179788 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555512-jmhfj" Mar 12 15:52:00 crc kubenswrapper[4832]: I0312 15:52:00.182741 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9" Mar 12 15:52:00 crc kubenswrapper[4832]: I0312 15:52:00.183203 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:52:00 crc kubenswrapper[4832]: I0312 15:52:00.183772 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:52:00 crc kubenswrapper[4832]: I0312 15:52:00.205014 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555512-jmhfj"] Mar 12 15:52:00 crc kubenswrapper[4832]: I0312 15:52:00.322256 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vwwb\" (UniqueName: \"kubernetes.io/projected/fce94f51-77bb-4dbc-b78a-54fe3a8e79b3-kube-api-access-9vwwb\") pod \"auto-csr-approver-29555512-jmhfj\" (UID: 
\"fce94f51-77bb-4dbc-b78a-54fe3a8e79b3\") " pod="openshift-infra/auto-csr-approver-29555512-jmhfj" Mar 12 15:52:00 crc kubenswrapper[4832]: I0312 15:52:00.423842 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vwwb\" (UniqueName: \"kubernetes.io/projected/fce94f51-77bb-4dbc-b78a-54fe3a8e79b3-kube-api-access-9vwwb\") pod \"auto-csr-approver-29555512-jmhfj\" (UID: \"fce94f51-77bb-4dbc-b78a-54fe3a8e79b3\") " pod="openshift-infra/auto-csr-approver-29555512-jmhfj" Mar 12 15:52:00 crc kubenswrapper[4832]: I0312 15:52:00.449136 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vwwb\" (UniqueName: \"kubernetes.io/projected/fce94f51-77bb-4dbc-b78a-54fe3a8e79b3-kube-api-access-9vwwb\") pod \"auto-csr-approver-29555512-jmhfj\" (UID: \"fce94f51-77bb-4dbc-b78a-54fe3a8e79b3\") " pod="openshift-infra/auto-csr-approver-29555512-jmhfj" Mar 12 15:52:00 crc kubenswrapper[4832]: I0312 15:52:00.515884 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555512-jmhfj" Mar 12 15:52:01 crc kubenswrapper[4832]: I0312 15:52:01.020052 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555512-jmhfj"] Mar 12 15:52:01 crc kubenswrapper[4832]: W0312 15:52:01.027407 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfce94f51_77bb_4dbc_b78a_54fe3a8e79b3.slice/crio-b760edb8407acb1ded853ae2fc09cb3c14a806aff9b56a5939586ed0a27b9c6f WatchSource:0}: Error finding container b760edb8407acb1ded853ae2fc09cb3c14a806aff9b56a5939586ed0a27b9c6f: Status 404 returned error can't find the container with id b760edb8407acb1ded853ae2fc09cb3c14a806aff9b56a5939586ed0a27b9c6f Mar 12 15:52:01 crc kubenswrapper[4832]: I0312 15:52:01.908919 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555512-jmhfj" event={"ID":"fce94f51-77bb-4dbc-b78a-54fe3a8e79b3","Type":"ContainerStarted","Data":"b760edb8407acb1ded853ae2fc09cb3c14a806aff9b56a5939586ed0a27b9c6f"} Mar 12 15:52:02 crc kubenswrapper[4832]: I0312 15:52:02.919820 4832 generic.go:334] "Generic (PLEG): container finished" podID="fce94f51-77bb-4dbc-b78a-54fe3a8e79b3" containerID="89142ab148287b73d2d1d0d7f3232edda10cb96a9b9ff04bba402c2099b81f21" exitCode=0 Mar 12 15:52:02 crc kubenswrapper[4832]: I0312 15:52:02.919873 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555512-jmhfj" event={"ID":"fce94f51-77bb-4dbc-b78a-54fe3a8e79b3","Type":"ContainerDied","Data":"89142ab148287b73d2d1d0d7f3232edda10cb96a9b9ff04bba402c2099b81f21"} Mar 12 15:52:04 crc kubenswrapper[4832]: I0312 15:52:04.374838 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555512-jmhfj"
Mar 12 15:52:04 crc kubenswrapper[4832]: I0312 15:52:04.516874 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vwwb\" (UniqueName: \"kubernetes.io/projected/fce94f51-77bb-4dbc-b78a-54fe3a8e79b3-kube-api-access-9vwwb\") pod \"fce94f51-77bb-4dbc-b78a-54fe3a8e79b3\" (UID: \"fce94f51-77bb-4dbc-b78a-54fe3a8e79b3\") "
Mar 12 15:52:04 crc kubenswrapper[4832]: I0312 15:52:04.524196 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fce94f51-77bb-4dbc-b78a-54fe3a8e79b3-kube-api-access-9vwwb" (OuterVolumeSpecName: "kube-api-access-9vwwb") pod "fce94f51-77bb-4dbc-b78a-54fe3a8e79b3" (UID: "fce94f51-77bb-4dbc-b78a-54fe3a8e79b3"). InnerVolumeSpecName "kube-api-access-9vwwb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:52:04 crc kubenswrapper[4832]: I0312 15:52:04.620489 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vwwb\" (UniqueName: \"kubernetes.io/projected/fce94f51-77bb-4dbc-b78a-54fe3a8e79b3-kube-api-access-9vwwb\") on node \"crc\" DevicePath \"\""
Mar 12 15:52:04 crc kubenswrapper[4832]: I0312 15:52:04.944940 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555512-jmhfj" event={"ID":"fce94f51-77bb-4dbc-b78a-54fe3a8e79b3","Type":"ContainerDied","Data":"b760edb8407acb1ded853ae2fc09cb3c14a806aff9b56a5939586ed0a27b9c6f"}
Mar 12 15:52:04 crc kubenswrapper[4832]: I0312 15:52:04.944987 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b760edb8407acb1ded853ae2fc09cb3c14a806aff9b56a5939586ed0a27b9c6f"
Mar 12 15:52:04 crc kubenswrapper[4832]: I0312 15:52:04.944998 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555512-jmhfj"
Mar 12 15:52:05 crc kubenswrapper[4832]: I0312 15:52:05.496560 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555506-x6jxr"]
Mar 12 15:52:05 crc kubenswrapper[4832]: I0312 15:52:05.501067 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555506-x6jxr"]
Mar 12 15:52:06 crc kubenswrapper[4832]: I0312 15:52:06.635714 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca28e44e-2492-499c-a91f-f48e4283db6d" path="/var/lib/kubelet/pods/ca28e44e-2492-499c-a91f-f48e4283db6d/volumes"
Mar 12 15:52:08 crc kubenswrapper[4832]: I0312 15:52:08.620026 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85"
Mar 12 15:52:08 crc kubenswrapper[4832]: E0312 15:52:08.620569 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a"
Mar 12 15:52:20 crc kubenswrapper[4832]: I0312 15:52:20.621145 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85"
Mar 12 15:52:20 crc kubenswrapper[4832]: E0312 15:52:20.622329 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a"
Mar 12 15:52:33 crc kubenswrapper[4832]: I0312 15:52:33.622777 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85"
Mar 12 15:52:33 crc kubenswrapper[4832]: E0312 15:52:33.624603 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a"
Mar 12 15:52:44 crc kubenswrapper[4832]: I0312 15:52:44.620124 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85"
Mar 12 15:52:44 crc kubenswrapper[4832]: E0312 15:52:44.621567 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a"
Mar 12 15:52:53 crc kubenswrapper[4832]: I0312 15:52:53.417695 4832 scope.go:117] "RemoveContainer" containerID="89667e7bd610354763bc6e5758e2ad4743e2988b706dfa10a9e57613d486693e"
Mar 12 15:52:53 crc kubenswrapper[4832]: I0312 15:52:53.486584 4832 scope.go:117] "RemoveContainer" containerID="d1e917960cb176dcb6a01bfe6be0980dac355a5b4497eddf598573211dd72c85"
Mar 12 15:52:58 crc kubenswrapper[4832]: I0312 15:52:58.624343 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85"
Mar 12 15:52:58 crc kubenswrapper[4832]: E0312 15:52:58.625203 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a"
Mar 12 15:53:10 crc kubenswrapper[4832]: I0312 15:53:10.620439 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85"
Mar 12 15:53:10 crc kubenswrapper[4832]: E0312 15:53:10.621567 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a"
Mar 12 15:53:24 crc kubenswrapper[4832]: I0312 15:53:24.620629 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85"
Mar 12 15:53:24 crc kubenswrapper[4832]: E0312 15:53:24.622027 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kdl9v_openshift-machine-config-operator(8c62aa7e-9fce-4677-b6bc-beb87644af0a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a"
Mar 12 15:53:35 crc kubenswrapper[4832]: I0312 15:53:35.620283 4832 scope.go:117] "RemoveContainer" containerID="7543b6c4b24dcf5fb5b036723dee7a3e76bfaf5b71c1749ce2a1d15dfec30c85"
Mar 12 15:53:35 crc kubenswrapper[4832]: I0312 15:53:35.952421 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" event={"ID":"8c62aa7e-9fce-4677-b6bc-beb87644af0a","Type":"ContainerStarted","Data":"2090f5e05e91e3935e82f4eec6f33325eb3289f127e2052a4270ce178a73e45e"}
Mar 12 15:54:00 crc kubenswrapper[4832]: I0312 15:54:00.171135 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555514-9mnkm"]
Mar 12 15:54:00 crc kubenswrapper[4832]: E0312 15:54:00.172823 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce94f51-77bb-4dbc-b78a-54fe3a8e79b3" containerName="oc"
Mar 12 15:54:00 crc kubenswrapper[4832]: I0312 15:54:00.172849 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce94f51-77bb-4dbc-b78a-54fe3a8e79b3" containerName="oc"
Mar 12 15:54:00 crc kubenswrapper[4832]: I0312 15:54:00.173224 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce94f51-77bb-4dbc-b78a-54fe3a8e79b3" containerName="oc"
Mar 12 15:54:00 crc kubenswrapper[4832]: I0312 15:54:00.174257 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555514-9mnkm"
Mar 12 15:54:00 crc kubenswrapper[4832]: I0312 15:54:00.177749 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 12 15:54:00 crc kubenswrapper[4832]: I0312 15:54:00.177860 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 12 15:54:00 crc kubenswrapper[4832]: I0312 15:54:00.178066 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9"
Mar 12 15:54:00 crc kubenswrapper[4832]: I0312 15:54:00.192333 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555514-9mnkm"]
Mar 12 15:54:00 crc kubenswrapper[4832]: I0312 15:54:00.246098 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pvss\" (UniqueName: \"kubernetes.io/projected/7a33f2ef-80f0-48ff-a2a7-55496347bc5e-kube-api-access-6pvss\") pod \"auto-csr-approver-29555514-9mnkm\" (UID: \"7a33f2ef-80f0-48ff-a2a7-55496347bc5e\") " pod="openshift-infra/auto-csr-approver-29555514-9mnkm"
Mar 12 15:54:00 crc kubenswrapper[4832]: I0312 15:54:00.348325 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pvss\" (UniqueName: \"kubernetes.io/projected/7a33f2ef-80f0-48ff-a2a7-55496347bc5e-kube-api-access-6pvss\") pod \"auto-csr-approver-29555514-9mnkm\" (UID: \"7a33f2ef-80f0-48ff-a2a7-55496347bc5e\") " pod="openshift-infra/auto-csr-approver-29555514-9mnkm"
Mar 12 15:54:00 crc kubenswrapper[4832]: I0312 15:54:00.378239 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pvss\" (UniqueName: \"kubernetes.io/projected/7a33f2ef-80f0-48ff-a2a7-55496347bc5e-kube-api-access-6pvss\") pod \"auto-csr-approver-29555514-9mnkm\" (UID: \"7a33f2ef-80f0-48ff-a2a7-55496347bc5e\") " pod="openshift-infra/auto-csr-approver-29555514-9mnkm"
Mar 12 15:54:00 crc kubenswrapper[4832]: I0312 15:54:00.516776 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555514-9mnkm"
Mar 12 15:54:01 crc kubenswrapper[4832]: I0312 15:54:01.084259 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555514-9mnkm"]
Mar 12 15:54:01 crc kubenswrapper[4832]: I0312 15:54:01.225621 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555514-9mnkm" event={"ID":"7a33f2ef-80f0-48ff-a2a7-55496347bc5e","Type":"ContainerStarted","Data":"15bb959e17c1a8fc82d973ad1fb592a20c3323bc71dc35a867d4bd2e41e07c81"}
Mar 12 15:54:03 crc kubenswrapper[4832]: I0312 15:54:03.252175 4832 generic.go:334] "Generic (PLEG): container finished" podID="7a33f2ef-80f0-48ff-a2a7-55496347bc5e" containerID="59933dfccc2d2f6a69abb4f02fd2e3a93f6fbd5eb3a699df60f94c7bb8f3ed18" exitCode=0
Mar 12 15:54:03 crc kubenswrapper[4832]: I0312 15:54:03.252225 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555514-9mnkm" event={"ID":"7a33f2ef-80f0-48ff-a2a7-55496347bc5e","Type":"ContainerDied","Data":"59933dfccc2d2f6a69abb4f02fd2e3a93f6fbd5eb3a699df60f94c7bb8f3ed18"}
Mar 12 15:54:04 crc kubenswrapper[4832]: I0312 15:54:04.635985 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555514-9mnkm"
Mar 12 15:54:04 crc kubenswrapper[4832]: I0312 15:54:04.744436 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pvss\" (UniqueName: \"kubernetes.io/projected/7a33f2ef-80f0-48ff-a2a7-55496347bc5e-kube-api-access-6pvss\") pod \"7a33f2ef-80f0-48ff-a2a7-55496347bc5e\" (UID: \"7a33f2ef-80f0-48ff-a2a7-55496347bc5e\") "
Mar 12 15:54:04 crc kubenswrapper[4832]: I0312 15:54:04.752280 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a33f2ef-80f0-48ff-a2a7-55496347bc5e-kube-api-access-6pvss" (OuterVolumeSpecName: "kube-api-access-6pvss") pod "7a33f2ef-80f0-48ff-a2a7-55496347bc5e" (UID: "7a33f2ef-80f0-48ff-a2a7-55496347bc5e"). InnerVolumeSpecName "kube-api-access-6pvss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:54:04 crc kubenswrapper[4832]: I0312 15:54:04.849477 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pvss\" (UniqueName: \"kubernetes.io/projected/7a33f2ef-80f0-48ff-a2a7-55496347bc5e-kube-api-access-6pvss\") on node \"crc\" DevicePath \"\""
Mar 12 15:54:05 crc kubenswrapper[4832]: I0312 15:54:05.274204 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555514-9mnkm" event={"ID":"7a33f2ef-80f0-48ff-a2a7-55496347bc5e","Type":"ContainerDied","Data":"15bb959e17c1a8fc82d973ad1fb592a20c3323bc71dc35a867d4bd2e41e07c81"}
Mar 12 15:54:05 crc kubenswrapper[4832]: I0312 15:54:05.274567 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15bb959e17c1a8fc82d973ad1fb592a20c3323bc71dc35a867d4bd2e41e07c81"
Mar 12 15:54:05 crc kubenswrapper[4832]: I0312 15:54:05.274292 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555514-9mnkm"
Mar 12 15:54:05 crc kubenswrapper[4832]: I0312 15:54:05.740366 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555508-t678v"]
Mar 12 15:54:05 crc kubenswrapper[4832]: I0312 15:54:05.750476 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555508-t678v"]
Mar 12 15:54:06 crc kubenswrapper[4832]: I0312 15:54:06.641680 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fddee263-2225-4a47-8361-60b13e70e608" path="/var/lib/kubelet/pods/fddee263-2225-4a47-8361-60b13e70e608/volumes"
Mar 12 15:54:53 crc kubenswrapper[4832]: I0312 15:54:53.616741 4832 scope.go:117] "RemoveContainer" containerID="f6705e21acd14eb306141f2ce128b6eded605ec39feab19120151aff176b0e7e"
Mar 12 15:55:56 crc kubenswrapper[4832]: I0312 15:55:56.314661 4832 patch_prober.go:28] interesting pod/machine-config-daemon-kdl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 15:55:56 crc kubenswrapper[4832]: I0312 15:55:56.315345 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kdl9v" podUID="8c62aa7e-9fce-4677-b6bc-beb87644af0a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 15:56:00 crc kubenswrapper[4832]: I0312 15:56:00.176368 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555516-gfg2f"]
Mar 12 15:56:00 crc kubenswrapper[4832]: E0312 15:56:00.177379 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a33f2ef-80f0-48ff-a2a7-55496347bc5e" containerName="oc"
Mar 12 15:56:00 crc kubenswrapper[4832]: I0312 15:56:00.177399 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a33f2ef-80f0-48ff-a2a7-55496347bc5e" containerName="oc"
Mar 12 15:56:00 crc kubenswrapper[4832]: I0312 15:56:00.177747 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a33f2ef-80f0-48ff-a2a7-55496347bc5e" containerName="oc"
Mar 12 15:56:00 crc kubenswrapper[4832]: I0312 15:56:00.178753 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555516-gfg2f"
Mar 12 15:56:00 crc kubenswrapper[4832]: I0312 15:56:00.181700 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 12 15:56:00 crc kubenswrapper[4832]: I0312 15:56:00.181998 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-cfcz9"
Mar 12 15:56:00 crc kubenswrapper[4832]: I0312 15:56:00.183296 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 12 15:56:00 crc kubenswrapper[4832]: I0312 15:56:00.195609 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555516-gfg2f"]
Mar 12 15:56:00 crc kubenswrapper[4832]: I0312 15:56:00.283750 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4k8v\" (UniqueName: \"kubernetes.io/projected/43f46b44-6ea6-417c-b0d3-4abb48e32dbe-kube-api-access-f4k8v\") pod \"auto-csr-approver-29555516-gfg2f\" (UID: \"43f46b44-6ea6-417c-b0d3-4abb48e32dbe\") " pod="openshift-infra/auto-csr-approver-29555516-gfg2f"
Mar 12 15:56:00 crc kubenswrapper[4832]: I0312 15:56:00.386371 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4k8v\" (UniqueName: \"kubernetes.io/projected/43f46b44-6ea6-417c-b0d3-4abb48e32dbe-kube-api-access-f4k8v\") pod \"auto-csr-approver-29555516-gfg2f\" (UID: \"43f46b44-6ea6-417c-b0d3-4abb48e32dbe\") " pod="openshift-infra/auto-csr-approver-29555516-gfg2f"
Mar 12 15:56:00 crc kubenswrapper[4832]: I0312 15:56:00.411974 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4k8v\" (UniqueName: \"kubernetes.io/projected/43f46b44-6ea6-417c-b0d3-4abb48e32dbe-kube-api-access-f4k8v\") pod \"auto-csr-approver-29555516-gfg2f\" (UID: \"43f46b44-6ea6-417c-b0d3-4abb48e32dbe\") " pod="openshift-infra/auto-csr-approver-29555516-gfg2f"
Mar 12 15:56:00 crc kubenswrapper[4832]: I0312 15:56:00.507843 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555516-gfg2f"
Mar 12 15:56:01 crc kubenswrapper[4832]: I0312 15:56:01.020876 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555516-gfg2f"]
Mar 12 15:56:01 crc kubenswrapper[4832]: W0312 15:56:01.023429 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43f46b44_6ea6_417c_b0d3_4abb48e32dbe.slice/crio-feafad18f9858e0f67f4daeed9f6ef18f764cd9fc1947e65880e905beeda5747 WatchSource:0}: Error finding container feafad18f9858e0f67f4daeed9f6ef18f764cd9fc1947e65880e905beeda5747: Status 404 returned error can't find the container with id feafad18f9858e0f67f4daeed9f6ef18f764cd9fc1947e65880e905beeda5747
Mar 12 15:56:01 crc kubenswrapper[4832]: I0312 15:56:01.025832 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 12 15:56:01 crc kubenswrapper[4832]: I0312 15:56:01.554881 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555516-gfg2f" event={"ID":"43f46b44-6ea6-417c-b0d3-4abb48e32dbe","Type":"ContainerStarted","Data":"feafad18f9858e0f67f4daeed9f6ef18f764cd9fc1947e65880e905beeda5747"}
Mar 12 15:56:02 crc kubenswrapper[4832]: I0312 15:56:02.567848 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555516-gfg2f" event={"ID":"43f46b44-6ea6-417c-b0d3-4abb48e32dbe","Type":"ContainerStarted","Data":"7ca019c7b954a5cfee68769f376bb5205750f8f4bdfc5b276aec3a8ef087d38a"}
Mar 12 15:56:02 crc kubenswrapper[4832]: I0312 15:56:02.603593 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555516-gfg2f" podStartSLOduration=1.6196857119999999 podStartE2EDuration="2.603562133s" podCreationTimestamp="2026-03-12 15:56:00 +0000 UTC" firstStartedPulling="2026-03-12 15:56:01.025137689 +0000 UTC m=+4119.669151945" lastFinishedPulling="2026-03-12 15:56:02.00901409 +0000 UTC m=+4120.653028366" observedRunningTime="2026-03-12 15:56:02.589202134 +0000 UTC m=+4121.233216410" watchObservedRunningTime="2026-03-12 15:56:02.603562133 +0000 UTC m=+4121.247576409"
Mar 12 15:56:03 crc kubenswrapper[4832]: I0312 15:56:03.583034 4832 generic.go:334] "Generic (PLEG): container finished" podID="43f46b44-6ea6-417c-b0d3-4abb48e32dbe" containerID="7ca019c7b954a5cfee68769f376bb5205750f8f4bdfc5b276aec3a8ef087d38a" exitCode=0
Mar 12 15:56:03 crc kubenswrapper[4832]: I0312 15:56:03.583117 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555516-gfg2f" event={"ID":"43f46b44-6ea6-417c-b0d3-4abb48e32dbe","Type":"ContainerDied","Data":"7ca019c7b954a5cfee68769f376bb5205750f8f4bdfc5b276aec3a8ef087d38a"}
Mar 12 15:56:05 crc kubenswrapper[4832]: I0312 15:56:05.072118 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555516-gfg2f"
Mar 12 15:56:05 crc kubenswrapper[4832]: I0312 15:56:05.181679 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4k8v\" (UniqueName: \"kubernetes.io/projected/43f46b44-6ea6-417c-b0d3-4abb48e32dbe-kube-api-access-f4k8v\") pod \"43f46b44-6ea6-417c-b0d3-4abb48e32dbe\" (UID: \"43f46b44-6ea6-417c-b0d3-4abb48e32dbe\") "
Mar 12 15:56:05 crc kubenswrapper[4832]: I0312 15:56:05.191547 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43f46b44-6ea6-417c-b0d3-4abb48e32dbe-kube-api-access-f4k8v" (OuterVolumeSpecName: "kube-api-access-f4k8v") pod "43f46b44-6ea6-417c-b0d3-4abb48e32dbe" (UID: "43f46b44-6ea6-417c-b0d3-4abb48e32dbe"). InnerVolumeSpecName "kube-api-access-f4k8v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:56:05 crc kubenswrapper[4832]: I0312 15:56:05.285979 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4k8v\" (UniqueName: \"kubernetes.io/projected/43f46b44-6ea6-417c-b0d3-4abb48e32dbe-kube-api-access-f4k8v\") on node \"crc\" DevicePath \"\""
Mar 12 15:56:05 crc kubenswrapper[4832]: I0312 15:56:05.606647 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555516-gfg2f" event={"ID":"43f46b44-6ea6-417c-b0d3-4abb48e32dbe","Type":"ContainerDied","Data":"feafad18f9858e0f67f4daeed9f6ef18f764cd9fc1947e65880e905beeda5747"}
Mar 12 15:56:05 crc kubenswrapper[4832]: I0312 15:56:05.606688 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feafad18f9858e0f67f4daeed9f6ef18f764cd9fc1947e65880e905beeda5747"
Mar 12 15:56:05 crc kubenswrapper[4832]: I0312 15:56:05.606734 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555516-gfg2f"
Mar 12 15:56:05 crc kubenswrapper[4832]: I0312 15:56:05.679363 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555510-wn5ch"]
Mar 12 15:56:05 crc kubenswrapper[4832]: I0312 15:56:05.690201 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555510-wn5ch"]
Mar 12 15:56:06 crc kubenswrapper[4832]: I0312 15:56:06.632460 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8f5617d-6e3a-47f4-851a-039f6bfe3808" path="/var/lib/kubelet/pods/f8f5617d-6e3a-47f4-851a-039f6bfe3808/volumes"